Journal of Archaeological Method and Theory, Volume 20, Issue 1, pp 102–150

What Was Brewing in the Natufian? An Archaeological Assessment of Brewing Technology in the Epipaleolithic


  • Brian Hayden
    • Archaeology Department, Simon Fraser University
  • Neil Canuel
    • Archaeology Department, Simon Fraser University
  • Jennifer Shanse
    • Archaeology Department, Simon Fraser University

DOI: 10.1007/s10816-011-9127-y

Cite this article as:
Hayden, B., Canuel, N. & Shanse, J. J Archaeol Method Theory (2013) 20: 102. doi:10.1007/s10816-011-9127-y


It has long been speculated that increasing demands for cereals for the purposes of brewing beer led to domestication in the Near Eastern Natufian cultures. While the question of whether cereals were being used in beer production is an important issue, it has remained a difficult proposition to test. We present some new perspectives on traditional brewing techniques relevant to this issue, on archaeological remains, and on the paleoecology of the Near East. Taken together, these observations provide circumstantial evidence that makes it increasingly likely that the brewing of beer was an important aspect of feasting and society in the Late Epipaleolithic.



This article addresses the problem of when brewing began in the Near East. This topic may initially appear more mirthful than serious as an area of scholarly research, yet it is related to a number of critical theoretical issues such as the role of feasting in early community dynamics and the possible reasons for cereal domestication. Sauer was one of the first scholars to ask whether the domestication of cereals was for the purpose of brewing beer rather than for basic subsistence (Braidwood 1953:515). Braidwood took up the question but was unable to resolve it. Katz and Voigt (1986) attempted to develop the argument somewhat further, but again had no conclusive evidence. In a similar vein, McGovern (2003:10; Tucker 2011:41) voiced his strong suspicions that brewing began in Natufian times or earlier (c. 13,000–9,800 cal BC), even though there was no definitive proof such as might be provided by residue analysis of ground stone containers. In the European context, Fischer and Kristiansen (2002:373, 376–378) have argued that initial grain production was undertaken to provide luxury foods for feasts, particularly beer and bread. Hayden (1990:46; 2004:274; 2009) also proposed that brewing for feasts may have motivated Natufian-related groups in the Near East to begin the domestication of cereals.

There is a general consensus that the Late Epipaleolithic (especially the Natufian) or derivative Pre-Pottery Neolithic A (PPNA) cultures provided the staging ground from which cereal domestication took place, if not the actual context of the first cultivation and barely detectable genetic changes. Thus, the issue of whether brewing was occurring in the Late Epipaleolithic of the Near East is of considerable theoretical importance. While we still have no definitive answer to this question, we feel that a more detailed exploration of the plausibility of brewing in the Late Epipaleolithic, especially the Natufian, is warranted given what we now know of the technology and resources used at that time. First, however, we would like to mention some relevant observations about the cultural contexts.

From a global comparative perspective, it can be noted that there appear to be no ethnographic examples of simple pristine hunter/gatherers who made alcohol. This may be for some very good reasons such as the lack of suitably sized containers or boiling technology, the high mobility of groups, or other logistical factors. It appears to be only with the advent of semi-sedentary or sedentary complex hunter/gatherers, such as those in Southeast Australia, that the first accounts of some alcoholic drinks begin to appear (e.g., Smyth 1972:347–348 and Dawson 1981:21 both cited in Builth 2002:331). Among the complex hunter/gatherers of southern California, there is an observation from 1775 of “wine” being produced from the fruit of a native tree (Fages 1937:22). Similarly, alcohol production has been documented for the Ainu complex hunter/gatherers of Japan who use millet beer as a sacred drink in ceremonies and feasts (Munro 1963:69, 91, 149, 170). Consumption of alcohol has also been inferred for the Jomon (complex hunter/gatherers who were the predecessors of the Ainu; Habu 2004:208). Among sites of Mesolithic complex hunter/gatherers in northern Europe, round-bottomed ceramic pots were found to contain fermented residues (Hulthen 1977). The use of grass beers has also been inferred for early Woodland groups in North America (Schoenwetter 2001:278–279). Overall, brewing is only sporadically documented for ethnographic complex hunter/gatherers. This is not surprising since many of them lived in the arctic, subarctic, hyperarid, or coastal rain forests where grains or berries with high sugar contents are scarce. However, complex hunter/gatherers used to be much more widespread in temperate zones, including the Near East where easily fermentable grains and berries were much more abundant.

Brewing is also extremely common among most simple horticulturalists and agriculturalists, and it is almost universal among those who grow grains. Moreover, the brewing of alcohol seems to have been a very early development associated with initial domestication in early Neolithic China (c. 7000 BC; McGovern et al. 2004; see also Crawford et al. 2005:315), in the Sudan (c. 9000 BC; Rojo et al. 2008:16; Haaland 2007:172), with the first pottery in Greece (Perlès 1996:44), and possibly with the first use of maize in Mesoamerica (Smalley and Blake 2003).

Within the Near East, the PPNA “kitchen” at Jerf el Ahmar (c. 9500 BC), with its large stone basins and polished disks, grinding stones, and hearths, all directly associated with barley remains, has led to a number of suggestions that brewing of cereals was taking place at this site (Stordeur and Willcox 2009:704–706; Haaland 2007:174). Similar kinds of kitchen or brewing furniture have been reported from the PPNA site of Tell ‘Abr 3 (Yartah 2005:5). In addition to these archaeological remains, the analysis of genetic diversity of European and Middle Eastern yeasts indicates that bread and brewing yeasts originated in the Near East and probably diverged from wild forms around 10–12,000 years ago (LeGras et al. 2007:2100). These results indicate a very close affiliation between bread and beer (vs. wine) yeasts. Thus, it seems most likely that the timeframe within which brewing of cereals started in the Near East is bracketed on the one hand by the emergence of complex hunter/gatherers, as probably represented by a number of closely related Epipaleolithic complexes (c. 13,000 cal BC), and on the other hand by the initial stages of domestication (or simple cultivation) as represented by the PPNA at Jerf el Ahmar (c. 9000 cal BC), where, incidentally, no morphologically domesticated grains occur (Willcox 1998, 1999:492, 496; 2011; Tanno and Willcox 2011).

We propose to examine a number of more detailed hypotheses or expectations related to the contextual conditions for possible brewing during the Late Epipaleolithic. The hypotheses that we have developed involve: (1) the technical constraints involved in brewing, (2) the technological pre-adaptations necessary for brewing in the Late Epipaleolithic, (3) the suitability of the first domesticated cereals for brewing, (4) the dietary importance of cereals in the Epipaleolithic and PPN, and (5) the expected social (particularly feasting) contexts for brewing.

It should also be pointed out that beers made in traditional tribal or village societies generally are quite different from modern industrial beers. As discussed in more detail later, traditional beers often have quite low alcohol contents (2–4%), include lactic acid fermentation giving them a tangy and sour taste, contain various additives such as honey or fruits, and vary in viscosity from clear liquids, to soupy mixtures with suspended solids, to pastes.

We turn now to the examination of the hypotheses.

Hypothesis 1: Technical Factors in Brewing

If brewing began in the Natufian and related cultures, then they should have been capable of implementing the necessary procedures. In order to understand the brewing constraints and possible technological challenges that would have existed in Natufian times, we provide a brief overview of brewing mechanics, critical steps, and possible alternative procedures for producing alcoholic drinks. The fundamental process of creating alcohol from fermenting grains has not changed in millennia. The starches inherent in the grains are converted into sugars by enzymes. Yeasts then consume these sugars and produce ethanol. The traditional approach to reliably reproducing this process generally involves three basic stages: malting, mashing, and fermenting. These technical requirements for brewing should not have been excessively difficult for people in Late Epipaleolithic cultures to meet or master.

Malting

In the first phase of traditional beer brewing, grains are immersed in water until they begin to germinate and are then drained and dried. This process of “controlled germination” produces the enzyme α-amylase in the seed and activates the β-amylase enzyme, which is naturally present in the raw, ungerminated grains (Hornsey 1999:21, 37; Hough et al. 1982:57–104; Moll 1979:54–64). These enzymes are collectively known as the diastase, and the ratio and quantity of enzymes in a particular grain are considered the “diastatic power” of that grain type. After being drained, the germinated grains (or “chit”) are dried. This stops the germination/saccharification, enhances the enzyme activity, and stabilizes and coagulates the proteins in the grain to provide both clarity and the distinctive malty flavor (Hornsey 1999:28). At this point, the grains can be stored until required.

Although the primary role of malting is to produce and activate the enzymes, this process also erodes the cellular barriers that protect the starch, which will later allow the enzymes to work with considerably less resistance (Autio et al. 2000). This feature of malting is particularly crucial to brewing, and Hornsey (1999:27) explains it as the “breakdown of the endosperm cell wall” which allows the enzymes much more complete access to the starch grains during mashing.

Mashing

The second stage of beer brewing is mashing. In this stage, the enzymes created during malting actively convert the majority of the grain starches to sugars. In order to facilitate the action of the enzymes, the malt is coarsely ground or crushed, dry or wet, before it is mixed with water. This creates smaller grain particles which provide greater surface area for the enzymes to work with; thus, the starches in the grains become more susceptible to conversion to sugars. Today, the ground grain is placed in a container called a “mash tun” and water is added. The resulting liquefaction or “gelatinization” of the starches causes their cells to unravel from their matrix, which then gives the enzymes a chance to attack each of these cells independently (Briggs et al. 1982:254–290; Hornsey 1999:29–30; Hornsey 2003). Grinding and liquefaction both work to allow the diastase enzymes to rapidly attack the small starch particles and convert them into the simple fermentable sugars, sucrose, and raffinose (Hough et al. 1982:254–290).1

Temperature control is a vital element of mashing. In one of the most common and simplest mashing techniques, water is heated to the desired temperature and then added to the malt. The most efficient enzyme activity occurs in the ranges of 50–55°C for proteases, 64–68°C for α-amylase, and 60–65°C for β-amylase (Hornsey 2003:13; Hough et al. 1982:278).

Temperature Control in Early Mashing

To achieve an optimal temperature range during mashing, historical brewers without the use of thermometers mashed “at a temperature at which the [brewer’s] face was best reflected” in the water, which would have been between 65°C and 70°C (Hornsey 1999:31). This same indicator, or similar ones, could also have been easily used by early brewers.

Once this temperature was achieved, maintaining it over a period of time would have been necessary (Kavanagh 1994:9–10; Hornsey 2003:17–18). The length of time required to mash would have depended on the grain-to-water ratio and the particular recipe, but at optimal temperatures today, on average, it is maintained from 30 min to 4 h (Hornsey 1999:38). Early brewers without ceramic technology to boil water could have achieved this by placing the mash inside a ground stone or waterproof organic container, such as wood, and gradually adding heated stones. Once they reached the optimal temperature, perhaps by observing the water’s reflective properties, they could use smaller heated rocks as needed to maintain the temperature (Hornsey 1999:31; Briggs et al. 2004:190–194). During the Natufian, stone boiling was probably used for extracting bone grease or nut oils and was likely used for preparing cereal gruels as well (see “Hypothesis 2”). Therefore, this was probably a familiar technology that could easily have been adapted for brewing.
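The plausibility of hot-rock mashing can be checked with a simple heat balance. The sketch below is a back-of-envelope illustration of our own; the mash volume, the stone temperature, and the assumption of no heat loss are our assumptions, not figures drawn from the sources cited:

```python
# Rough heat balance for hot-rock mashing: how many kilograms of
# fire-heated stones would bring a watery mash to mashing temperature?
# Assumes granite-like stones heated to ~400 C in a fire and no heat
# loss to the container or the air (both are loose assumptions).

C_WATER = 4186.0  # specific heat of water, J/(kg*K)
C_STONE = 790.0   # approximate specific heat of granite, J/(kg*K)

def stone_mass_needed(mash_kg, t_start, t_target, t_stone=400.0):
    """Mass of hot stones (kg) required to raise the mash to t_target (deg C)."""
    heat_needed = mash_kg * C_WATER * (t_target - t_start)
    heat_per_kg_stone = C_STONE * (t_stone - t_target)
    return heat_needed / heat_per_kg_stone

# Raising a 5 L (roughly 5 kg) mash from 20 C to the ~65 C optimum
# takes only a few kilograms of stones:
print(round(stone_mass_needed(5.0, 20.0, 65.0), 1))
```

Even allowing for substantial heat loss, a handful of fist-sized stones would have sufficed for a small mash, and smaller reheated stones could have offset cooling, consistent with the procedure described above.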

Katz and Voigt (1986:32) have also noted that in the Middle East, where the “daily summer temperatures can reach 120°F [49°C] or more, there may have been little need to heat the brew.” Although this is below the optimal temperature, it would have been sufficient for β-amylase conversion, which has a wider effective temperature range (Hornsey 1999:31–32; Briggs et al. 2004:190–194).

Alternatives to Malting and Mashing

The saccharification process, by which starch is converted to sugar that can be used by yeast for alcohol production, would have been a necessary step for early brewers working with cereals. The use of malting and mashing to create and utilize diastase to convert starches into sugars dates back to the earliest definite historical and archaeological evidence for brewing, ca. 4000 BC (Samuel 1996:5, 8; Samuel and Bolt 1995; Katz and Voigt 1986:29; Jennings et al. 2005). However, other effective methods of saccharification exist and are regularly employed, including the use of bappir, molds, and grain mastication.

Bappir (beer bread), or manna bread, has long been hypothesized as an alternative for the mashing process (Katz and Maytag 1991; Samuel and Bolt 1995:27; Jennings et al. 2005:280). This theory is based on interpretations of historical Sumerian and Egyptian texts, such as the Hymn to Ninkasi, cylinder seals, and hieroglyphics including those found on pottery sherds and tomb walls (Katz and Maytag 1991).

To create bappir, ground-malted grain was mixed with water and possibly adjuncts for flavor, and formed into a loaf. The loaf was then slow cooked in order to achieve temperatures suitable for the enzymes in the malted grain to break down the starches into sugars. Some archaeologists have argued that these loaves may also have been inoculated with a significant amount of yeast (a fungus within the Eumycota), effectively producing a starter for a brew which could either be used immediately or stored for future use (Samuel and Bolt 1995:27). When it was required for brewing, the bappir likely would have been broken up, mixed with water, and strained into a fermentation vessel (Dineley 2004; Katz and Maytag 1991; Jennings et al. 2005:280).

Dineley (2004) has conducted a series of experiments which demonstrate the potential for this alternative to mashing, which effectively eliminates the need for a mash tun and the challenges of maintaining the temperatures necessary for the diastase to be effective. However, to date, the archaeological evidence has not been able to substantiate the process of using bappir, and it is still the subject of some debate. Samuel (1996) has conducted extensive research into the analysis of starch residue in pharaonic Egypt (1550–1070 BC), which demonstrated the morphological breakdown of cereal starches, particularly einkorn and barley, into sugars. Through this analysis, Samuel concluded that the process of using bappir was not a factor, at least in the domestic brewing process (Samuel and Bolt 1995:31).

Whereas bappir still requires malted grains to provide the enzymes, the use of certain molds in saccharification can obviate the need for both the malting and mashing processes. Molds have a long history of use in fermenting various foods around the world, such as milk, cheese, and sausage (Law 1997; Mintzlaff et al. 1972). One of the best-known cases is the use of the koji mold in the Japanese production of sake, soy sauce, and miso. Koji is a filamentous fungal inoculum made up of Aspergillus oryzae, Aspergillus kawachii, Aspergillus shirousamii, Aspergillus awamori, and Aspergillus sojae cultures (Kitamoto 2002; Steinkraus 1983). This mold produces protease and amylase enzymes which then break down existing starches into dextrins, glucose, and maltose (Djien 1982:30; Kitamoto 2002; Kodama and Yoshizawa 1977; Steinkraus 1983).

A number of other molds, as well as some yeasts, have similar saccharification properties, including the mold genera Mucor and Rhizopus and some strains in the yeast genera Candida, Endomycopsis, and Saccharomyces (Djien 1982:27–28). In particular, molds of the genus Mucor can produce ethanol directly from starches. However, the amount of alcohol they produce does not rival that produced by the Saccharomyces cerevisiae yeast (Lübbehüsen et al. 2004), and thus they seem unlikely to have been used in initial brewing attempts.

More commonly, amylase enzymes can also be found in human saliva (van Houte 1983:S659). Thus, a mixture of grain and water can be inoculated with amylase simply by chewing some of the ground grains and then introducing these masticated grains to the mixture (Jennings et al. 2005; Cutler and Cardenas 1947; Poo 1999:132–133). In some cases, brewers continually remove bits of grains from the mixture for mastication and then reintroduce them to increase the concentration of enzymes and therefore the speed of fermentation and the diastatic power (Hayashida 2008:165). The mixture is then heated to approximately 50–60°C and cooled, sometimes multiple times depending on the recipe, so that the amylase can have its maximum effect (Driver and Massey 1957:268; Jennings et al. 2005:279).

Grain mastication has been used around the world as an ancient saccharification method. In Japan, some sake production has incorporated mastication, as have both Japanese and Chinese li-wines (Kodama and Yoshizawa 1977; Poo 1999:132–133). Likewise, mastication is used in the brewing of traditional manioc beers in Brazil (kaschiri) and Mozambique (masata; Pederson 1979; Steinkraus 1983:344). The production of chicha in many areas of Latin America also incorporates the chewing of maize in the saccharification process. In the case of chicha production, women are recognized as the primary producers involved in all aspects of the brewing process, from grinding, saccharification, and fermentation to serving (Jennings et al. 2005; Hayashida 2009:245). The amount of chicha and the alcohol levels produced can vary greatly; 1 kg of maize can yield between 1.16 and 6.44 l of beer containing from 1% to 12% alcohol (Jennings 2005:244–246). However, 1 kg of maize typically yields 1.2–1.6 l of chicha with 2–5% alcohol depending on the recipe (Jennings et al. 2005:246).

In the experiments conducted by Canuel (discussed below), mastication was much less efficient in converting starches to sugars than malting. Though this does not rule out mastication as a possible means of saccharification in Late Epipaleolithic beer brewing in the Near East, it seems that there were other alternatives which would have required less time and effort to achieve a higher alcohol content.

Fermenting

In the final stage of modern Western brewing, yeast is added to, or allowed to naturally settle on, the wort. Under anaerobic conditions, the yeast consumes simple sugars and produces CO2 and ethanol (Hough et al. 1982:644–684; Hornsey 1999; Munroe 1995). Yeast requires anaerobic conditions or it will reproduce rather than simply consume the sugars. Although this process must have been recognized very early, it was not understood in detail until the mid-nineteenth century, when Bavarian monks and German master brewers slowly began improving fermentation in countries such as Czechoslovakia and the Netherlands. Louis Pasteur was the first to recognize and demonstrate the biological basis of fermentation and to apply microbiological theory to yeasts (Hough et al. 1982; Hornsey 1999:165).

Fermentation creates an atmosphere that is so hostile to microbes that after a time even the yeast itself cannot successfully survive (Hornsey 1999:119; Briggs et al. 2004:438). The exact amount of alcohol produced is determined by the individual yeast strain used. Conventional brewing yeast averages 3–6% alcohol in a typical grain brew with up to 10% for specialized high gravity yeasts, whereas resilient wine yeasts can produce 10–15% alcohol (Briggs et al. 2004:438–440; Jennings et al. 2005; Munroe 1995). In fact, as with chicha, many traditional beers have surprisingly low alcohol contents, often ranging between 2% and 4% (Hornsey 2003:8, 21).
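These alcohol levels sit well below the stoichiometric ceiling of glucose fermentation (C6H12O6 → 2 C2H5OH + 2 CO2), under which each gram of sugar can yield at most about 0.51 g of ethanol. The sketch below is our own illustration; the sugar concentration in the example is a hypothetical value, not a figure from the sources cited:

```python
# Upper bound on alcohol by volume implied by the fermentation
# stoichiometry C6H12O6 -> 2 C2H5OH + 2 CO2. Real brews fall short
# of this theoretical maximum.

M_GLUCOSE = 180.16       # molar mass of glucose, g/mol
M_ETHANOL = 46.07        # molar mass of ethanol, g/mol
ETHANOL_DENSITY = 0.789  # g/mL at room temperature

def max_abv(sugar_g_per_l):
    """Theoretical maximum % alcohol by volume from fermentable sugar (g/L)."""
    ethanol_g = sugar_g_per_l * (2 * M_ETHANOL / M_GLUCOSE)  # ~0.511 g per g sugar
    ethanol_ml = ethanol_g / ETHANOL_DENSITY
    return ethanol_ml / 1000.0 * 100.0  # fraction of a litre, as a percentage

# A thin, gruel-like wort with ~50 g/L of fermentable sugar caps out
# within the 2-4% range typical of traditional beers:
print(round(max_abv(50.0), 2))
```

On this arithmetic, even a dilute mash with incomplete starch conversion could reach the low alcohol contents characteristic of traditional beers.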

The most widespread yeast used for brewing is S. cerevisiae, also known as brewer’s or baker’s yeast. Residue analyses of containers used in early brewing have uncovered traces of yeast, sometimes even identifiable to species (Samuel 1996; McGovern et al. 2004). The DNA extracted from ancient wine vessels that date back to 3150 BC provides the earliest physical remains of domesticated strains of this yeast (Fay and Benavides 2005:66, 69). However, based on DNA analysis of modern strains and their relative divergences from genetic mutations, it has been estimated that the domestication of the S. cerevisiae strain occurred much earlier, approximately 11,000 years ago (LeGras et al. 2007:2100). This strain is commonly found in honey or on the skins of high-sugar fruits, such as grapes (Fay and Benavides 2005; McGovern 2003:80; Hornsey 2007:2–4; Wadley and Martin 1993; Mortimer 2000; Redzepovic et al. 2002; Naumov et al. 1998). The addition of honey or fruits, either fresh or dried, that already have this yeast present would inoculate a brew and allow for fermentation if all other factors were favorable. However, airborne wild yeasts exist in many environments, and while not all of these yeasts will produce alcohol, there are brewers who still rely on naturally available yeasts to ferment traditional alcohols. Natural yeast inoculation could also occur via insects such as the genus Drosophila (fruit flies), contamination from processing tools (grinding stones, mortars, or baskets), or the introduction of pre-inoculated elements into the brew (Phaff and Starmer 1987; Sniegowski et al. 2002; Hornsey 2007:11–74).

S. cerevisiae is also known to co-exist with Saccharomyces paradoxus in the sap exudates of oak trees and on the shells of acorns (Koufopanou et al. 2006:1941; Redzepovic et al. 2002). The S. paradoxus strain is the identified wild relative of S. cerevisiae and is considered by some to be its “natural parent species” (Redzepovic et al. 2002:306; Koufopanou et al. 2006). The alcoholic fermentation qualities of the two strains are virtually identical (Sniegowski et al. 2002:305). It seems entirely feasible, therefore, that mortars and pestles or grinding stones used to process acorns could have become inoculated with S. paradoxus or S. cerevisiae, and the yeast could then have spread to cereal grains that were dehusked, ground, or cracked with the same implements (Hornsey 2003). Similarly, the transfer of yeast may have happened during storage of grains, since fruit or acorns stored in the same structure could easily inoculate any adjacent cereals. These scenarios would result in an unintentional but repeatable process that, due to the storability of both grains and acorns, would likely have made year-round alcohol production possible.

Any containers used for fermentation would also become inoculated with yeast, and would inoculate any subsequent wort. Similarly, if a lid was used to protect the fermentation, this would also become exposed to a large amount of yeast. The lid could have been carefully, even ritually, stored and reused time and again to inoculate brew after brew. Thus, fermenting containers could be reliably reused for successive brews whenever they were desired or required.

Lactic Acid Fermentation

The inclusion of lactic acid fermentation in the brewing process was likely not an intended outcome, but lactic bacteria can exist naturally in malted grain and could cause lactic acid fermentation. This souring process is a key factor in traditional African brewing, though it is relatively rare in Western beer. In industrial brewing, lactic acid fermentation is accidental, occurs in very low amounts, and is undesirable because of the Western aversion to the taste, though it produces no harmful effects. In the African brewing process, the high amounts of lactic acid are deliberate and calculated to provide a unique taste that is valued and not detrimental to the overall flavor.

Lactic acid fermentation is most prevalently induced by the lactic bacteria Lactobacillus, Leuconostoc, and Pediococcus (Oyewole 1997:291). The lactic acid they produce creates a very distinctive flavor and effectively “‘controls’ the rest of the brewing process” (Haggblade and Holzapfel 1989:319). It also helps prepare the grains for liquefaction by softening their proteins and making them more suitable for digestion. This added acidity means the pH is lower, which helps protect the wort against unwanted bacteria but restricts the conversion of starches to sugars by the enzymes. This results in the beer having a low alcohol content, because reduced amounts of sugar result in less ethanol during alcoholic fermentation (Haggblade and Holzapfel 1989:319).

Lactic acid fermentation may be a distinctive trait of African beer, but it has also been used in other areas and may have characterized early Near Eastern beers. Lactic acid bacteria are recognized for their beneficial roles in controlling diseases, increasing nutrition, and detoxifying foods (Oyewole 1997:292). As part of the brewing process, lactic acid fermentation would result in increased nutrients in the finished beer (Haggblade and Holzapfel 1989:327; Steinkraus 2002:30).

Possible Early Brewing Processes

On the basis of ethnographic observations, it is apparent that malting could have been achieved in a number of ways. The grains may have been left in water until they began to germinate, and then dried in the sun (Arthur 2003:517; Katz and Voigt 1986:32). Alternatively, the grains could have been hand collected from the ground at the time they were germinating (per Kislev et al. 2004) and used immediately or dried. Stored grains may have also germinated by mistake because they were stored under humid conditions. Sprouted grains may also have been used to make traditional “manna” breads or gruels. It seems likely that due to natural variations or accidents in food preparation and consumption, proto-brewers would have discovered that grains in the process of germination could result in sweetish or tangy, fermented, alcohol-containing mixtures, whereas ungerminated grains would result in blander, more starchy products.

The process of mashing would likely have been similar to processes for making gruel, which is theorized to have preceded both beer and bread (Katz and Voigt 1986:23). Early experimenters would have needed to mix the ground-up, germinated grains with water, heat the mixture, and then let it cool. The ideal temperature would have been approximately 65°C for the primary enzymes, but there are other enzymes that actively convert starch to sugar at lower temperatures. Once alcohol was produced through the fortuitous combination of optimal conditions, early brewers could easily have found a process that created an adequate amount of alcohol by experimenting with how long the liquid was exposed to heat.

Early fermentation could have been easily discovered and achieved by leaving a gruel-cum-mash out overnight, as is done today in Africa (Angwenyi Ogake, personal communication). Airborne yeast could have inoculated the mixture and begun the fermentation process, with formation of a “kräusen” (surface spume) by the next morning. Though the kräusen may not have looked appetizing, it is clear that most traditional people experimented actively with over-aged or fermented foods.

Early brewers did not need to know why beer was produced in these stages, but they would likely have been able to determine the basic necessities for brewing: the grains required germination, the gruel required heating and was best if it was thin, and fermentation required that the mixture sit for one or more days. Additionally, once the process had been successfully repeated a few times, one might assume that early brewers would use the same container, which may have seemed to magically create the first brew, and which thus inoculated subsequent brews. As a result, the process would be less reliant on airborne yeasts or chance and therefore could be more predictable in its reproduction. Ethnographers have noted that African brewers utilized the stocks of previous fermentations to inoculate later brews, as evidenced by Kenyan brewing of uji, mahewu/magou, and Nigerian ogi, thereby ensuring the quality and reproducibility of these fermented foods (Oyewole 1997:289, 291; Steinkraus 2002:26–27). Similar practices may well have been followed among the first brewers.

A single container could have been used to produce even large volumes (i.e., 5 l or more) of beer in a prepottery society. For instance, the larger ground stone mortars or even limestone basins we document under “Hypothesis 2” could have been used for all the brewing phases: malting, mashing, and fermenting.

In sum, there appear to have been no significant constraints or impediments in the technical requirements for brewing beer in the Epipaleolithic. Moreover, the basic requirements for malting, mashing, and fermenting would likely have been met through various methods and technologies already utilized by Epipaleolithic complex hunter/gatherers.

Hypothesis 2: A Suitable Late Epipaleolithic Technological Infrastructure

It is unlikely that brewing would have developed without the prior existence of suitable technological prerequisites. Therefore, if brewing occurred in the Natufian, we should expect technological pre-adaptations, or “exaptations,” for basic brewing necessities and procedures to occur in the Late Epipaleolithic. In fact, we find evidence of several “exaptations” which could predispose complex hunter/gatherers to experiment with, and consequently develop, brewing. At the most fundamental level, the technology for harvesting and processing cereal grains en masse (including sickles, baskets, storage facilities, mortars, grinding stones, and boiling stones) would have been fundamental to the brewing process. Sickles are well documented, but other aspects merit some discussion.

Storage Requirements

For year-round brewing to have been possible, some sort of grain storage would have been necessary. Surpluses would have been required for brewing; and storage, by itself, usually entails the production of surplus staples in order to deal with uncertainties in future production and the risks involved in storage. The large size of some Natufian sites (ranging into the hundreds of occupants), the high overall population density, the high degree of sedentism, the effort expended in the procurement of prestige items, the maintenance of domesticated dogs, and the expansion of the Natufian culture during the Epipaleolithic all argue in favor of a relatively rich subsistence base with ample surpluses, at least in favored environments (see Hayden 2004, 2011b).

Perrot (1966:460; Perrot and Ladiray 1988:4–5, 13–14) reported multiple large storage pits at the Natufian site of Ain Mallaha, while large pits suitable for storage also occur at Wadi Hammeh 27 and Raqefet (Edwards 1991:125; Nadel and Lengyel 2009). Alternatively, some of the unusually small stone structures documented at sites like Mallaha, Hallan Çemi, and Hayonim Cave (structures only 1.5–2.0 m in diameter or less; Valla 2008:72, 106; Rosenberg and Redding 2000; Bar-Yosef 1991:84) were unlikely to have been residences, but could have served as storage facilities. While Bar-Yosef thinks the small structures in Hayonim Cave were unlikely to have been used for storage due to the lack of plastering, others view them as storage facilities (Tomenchuck 1983), and the hearths in them could have been used for smoking or drying foods stored above rather than for any postulated domestic functions. In addition, Bar-Yosef (1991:89) did recognize a limestone slab storage feature on Hayonim terrace reported by Valla et al. (1989) and postulated the use of baskets for storage as well (Bar-Yosef 2002:110, 112).

Mortars and Containers

Among the few basic tools and devices required for successful brewing, watertight containers would have played several roles. If beer was made during the Natufian or PPNA periods, a key challenge would have been the creation of suitable containers. The fact that some sort of watertight containers existed in the Epipaleolithic is indicated by the processing of animal bones for grease, apparently by boiling (Munro and Bar-Oz 2004). Ground stone or hollowed wood containers would have been particularly advantageous in retaining liquids and heat for long periods during the mash, especially compared to baskets or hide containers (Kavanagh 1994). The size of the mashing container could vary greatly, so long as it could hold an adequate amount of grain and water. The same container used for mashing could readily have been used for fermenting. Although yeast requires anaerobic conditions to produce ethanol, the fermentation container itself would not have to be airtight. During fermentation, the production of both CO2 and a “kräusen” (the frothy barrier on the mixture’s surface) would insulate and protect the wort from outside air and contaminants. Wooden or other organic lids could also help retain the carbon dioxide, exclude oxygen, and protect the wort from foreign matter and contaminants. Grinding stones, or perhaps more likely, stone mortars with stone pestles (for crushing rather than dehusking grains, per Wright 1994:243), would have been other tools necessary for preparing the malted grain for brewing.

Within the wide range of forms classed as “mortars,” at least some of the deep mortars recovered from Natufian sites resemble brewing pots used by hill tribes in Southeast Asia in volume, shape, and mouth aperture (Fig. 1; Hayden 2004:275). Moreover, considering that stone pestles tend to crush grains (Meurers-Balke and Lüning 1999:249), objects referred to as “mortars” associated with stone pestles (e.g., at Wadi Hammeh) were probably not used for dehusking grain (Wright 1994:243). Some of these mortars and pestles may have been designed and used for crushing nuts, but they would also have been ideal for coarsely crushing grain either for cooking or brewing. In contrast, finely ground flour would create a highly gelatinized wort that would impede fermentation. Other Natufian stone receptacles, particularly the long narrow types and some bedrock and boulder types of “mortars,” were so narrow that they could not have functioned as mortars (Nadel and Lengyel 2009; Valla 2009). However, the narrow forms could conceivably have been used in brewing, since their form would favor the retention of carbon dioxide above the wort and thus help maintain the anaerobic conditions necessary for fermentation. Straws could have been used to consume brews from the narrow receptacles. Even normal mortars that may have been used for dehusking grain with wooden pestles could also occasionally have been used for boiling liquids or worts, since they were entirely suitable for such purposes and, if already in place, would have avoided the extra effort required to make watertight organic containers.
Fig. 1

a,b Examples of Natufian “mortars” from Jebel Saaïde, Lebanon (photos courtesy of Bruce Schroeder). c Several examples of brewing vessels currently used for brewing rice beer among hill tribes in Southeast Asia. Such brewing vessels range upward in size to 1.2 m, and, like Natufian mortars, are generally imported from considerable distances at considerable costs and treated like prestige items of great value (photo by Hayden). The general size, morphology, and aperture of the vessels appear similar to some of the “mortars” recovered from Natufian sites

The smaller rock mortars or depressions that occur at some sites like Raqefet could have been used to germinate grain or pound it into malt (Fig. 2). Several boulder mortars analyzed in detail by Valla (2009:16) were set on angles inappropriate for use as mortars, and appear to have contained “fluid” material with residues (such as lees or malted grain?) scooped out with bone spoons. The famous and problematical bottomless “stone-pipes” with 30 cm mouth diameters used in funeral contexts (Kaufman and Ronen 1987) could also conceivably have been used in beer production if their bottoms were sealed with clay for brewing and then cleaned out when the seal was removed. In fact, if reed tubes were inserted through such a clay base and stoppered, fermented beer could have been drawn off as a relatively clear liquid, as in modern brewing, due to the filtration provided by the grain bed. Beer drunk through straws inserted into the top of the wort would be cloudy or murky and contain particles with a more bitter taste. Deep wooden mortars could also easily have served as brewing pots. Thus, a number of types of stone vessels recovered in Natufian sites would have been suitable for malting and brewing, as well as possible bark, basket, hide, or wood containers. For the purposes of storing and serving both beer and wine, narrow-spouted ceramic vessels, or the “wine skins” still used in the region for storing and serving, were preferred in order to minimize spillage and contact with microbes in the environment (Michel et al. 1993; McGovern 2003:55). The presence of narrow-spouted ceramic vessels in Japanese Jomon sites has been used to infer the production of alcohol prior to agriculture there (Habu 2004:82).
Fig. 2

Some of the stone bedrock “mortars” from Raqefet Cave that illustrate the unusually narrow and deep cavities as well as the copious volumes of some of these features and artifacts (courtesy of Nadel and Lengyel (2009))

In many regions of the Mesolithic world, the primary use of large mortars and pestles may have been the mass processing of nuts. From periods spanning the Kebaran to the Neolithic, mass mast processing is well attested (at least in northern sites such as Abu Hureyra, Mureybit, Wadi Hammeh, and Jerf el Ahmar) by the sometimes abundant archaeological occurrence of shell fragments of Pistacia species and almonds (Martinoli and Jacomet 2004; Barlow and Heck 2002; Willcox et al. 2009:153; Helmer et al. 1998:27). Perhaps acorns were also used, since they were used in the preceding and succeeding periods (McCorriston 1994:98). Pistacia species, in particular, constitute one of the most ubiquitous and abundant taxa from which plant remains have been recovered. Some wild nuts, like almonds or acorns, have toxic constituents that can be removed by crushing and treating with water or heat. Other wild nuts are so small or have such dense shells that mass processing is required to use them, as in the case of Pistacia atlantica, which was especially abundant in the predomestication botanical assemblages noted above (Willcox et al. 2008:319–320). One way of processing small or toxic nuts was to crush them in mortars and then boil the crushed material in order to extract the oils (in many species, such as Amygdalus orientalis [almonds], lipids make up almost half the dry weight of the nuts; Martinoli and Jacomet 2004:50, 52). This same crushing and boiling extraction technique was a widespread feature among many native ethnographic groups in North America (Taché et al. 2008:66). Due to the labor-intensive nature of producing nut oil, it was generally a highly valued food. According to ethnographic sources, about 100 kg of walnuts yielded 8 l of oil (ibid.).
When Californian hunter/gatherers obtained oil from wild Prunus seeds, this “did not merely represent a more or less nutritious, storable resource, but was also important in the social life, as a readily available gift or exchange product” (Martinoli and Jacomet 2004:52). In fact, in Anatolia, wild almond seeds and P. atlantica nuts are still gathered and rendered into oil and made into drinks that resemble coffee (ibid.; Willcox et al. 2008:319). In their freshly extracted boiled form, the emulsified nut oils make a creamy and tasty drink. A clear, sweet oil without bitter tannins was also produced from acorns in North America (Taché et al. 2008:66) and could have been produced as well by Near Eastern Natufian groups.
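Taken at face value, the ethnographic yield figure above also suggests how inefficient traditional extraction was, which helps explain why nut oil was so valued. The following is a minimal sketch of that arithmetic; the oil density (~0.92 kg/l) and the ~50% lipid fraction are our illustrative assumptions drawn from typical values, not data reported by the cited sources:

```python
# Rough yield check for the ethnographic figure "100 kg of walnuts -> 8 l of oil".
# OIL_DENSITY and lipid_fraction are assumed, illustrative values.

OIL_DENSITY = 0.92      # kg per litre, typical for vegetable oils (assumed)

nuts_kg = 100.0         # mass of walnuts processed (ethnographic figure)
oil_l = 8.0             # oil recovered (ethnographic figure)
lipid_fraction = 0.5    # lipids as share of dry nut weight (assumed, cf. almonds)

oil_kg = oil_l * OIL_DENSITY                      # mass of oil actually recovered
recovery = oil_kg / (nuts_kg * lipid_fraction)    # share of available lipids recovered
print(f"{recovery:.0%} of the available lipids recovered")  # prints "15% ..."
```

Even under generous assumptions, only about one seventh of the available lipids would have been recovered by crushing and boiling, underscoring the labor-intensive, prestige character of the product.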

Thus, the archaeological occurrence of nuts in abundance, at least in some northern Natufian sites, may indicate the production of nut oils or drinks which involved crushing in a mortar followed by a prolonged boiling process. This would have been identical to the crushing and boiling procedure used with grains for brewing. The adoption of brewing in Natufian times may explain why mortars and pestles (and perhaps grinding stones for coarsely grinding the chit or making bappir) are relatively rare until Natufian times, despite the apparent use of cereals beginning in Kebaran times (or even earlier) when cereals may have been simply dehusked in wooden mortars and boiled as whole grains without any need for grinding.

Additional Existing Circumstances

Stone boiling technology is represented by considerable amounts of fire-cracked rock and of bones fragmented for grease extraction in Natufian sites (Weinstein-Evron et al. 2007; Munro and Bar-Oz 2004). Given the harvesting of cereal grains, it can be expected that porridges or gruels of grains were prepared by boiling. These could easily have been contaminated with alcohol-producing yeasts, especially if grains were cracked or dehusked in mortars that were also used for shelling acorns, since alcohol-producing yeasts appear to be abundant on oak tree surfaces and acorns (Koufopanou et al. 2006:1941; Redzepovic et al. 2002).

Another exaptation, as suggested by Kislev et al. (2004), was the likely collection of shed spikelets of wild barley and emmer off the ground. This is an unexpectedly efficient and effective way of gathering cereal grains. As the authors note, this technique has the advantage of extending harvests throughout the summer, even into the sprouting season, rather than limiting harvests to the few days when ears were relatively ripe but had not yet shattered. If sprouted grain was collected off the ground and made into a porridge, it would have been naturally malted, thus providing a sweeter taste. This would have been a key technical step in beer production.

An additional pre-adaptation could have existed if Late Epipaleolithic gatherers were tapping and using palm trees for slightly sugary drinks, as is still done from India to Morocco today. This would have provided knowledge of the fermentation process since palm sap begins fermenting immediately after collection due to natural yeasts (see “Sugary Saps” under “Hypothesis 3” below).

Thus, a number of technological factors and plausible patterns of exploitation (from grain surpluses to oil extraction and use of palm sap) indicate that Late Epipaleolithic communities were technologically primed for the production of alcohol and the brewing of beer. The possible prevalence of alternative resources for brewing may have also provided an exaptation that led to the use of cereals in beer production, as explored in our next hypothesis.

Hypothesis 3: Suitable Ingredients for Alcohol Production

If brewing was an important reason why cereals were used or domesticated, then the major species used should all have been suitable for brewing. Before considering this matter in detail, it is relevant to discuss some other reasonable, noncereal-based candidates for initial alcohol production.

Sugary Saps

As a food source, tree sap is particularly beneficial for its numerous vitamins, minerals, and carbohydrates. Its ready accessibility in spring would have “made it the ideal source of vitamins needed after a long winter” (Vencl 1994:302). Tree sap, especially from palms, is also a widely used source of fermentable sugar in traditional alcohol production. Though both its origin and wild progenitor are unknown, the date palm (Phoenix dactylifera) is currently found from North Africa to western India, and the domesticated form is believed to have originated between northeastern Africa and western Iran, including the area of the Fertile Crescent (Damania 1998; Jain et al. 2011; Janick 2005). P. dactylifera grows in the Levant today, particularly “in wet seeps along wadis, around brackish springs, and along the southeast shore of the Dead Sea” (Fall et al. 2002:453).

The wine, or “toddy,” that is produced from these and other palm trees has an alcohol content similar to contemporary beer (2–10%; Steinkraus 1983:315; Iwuoha and Eke 1996:530). In fact, within 2 h of tapping palm sap, fermentation can yield an aromatic wine with up to 4% alcohol content that is sweet and mildly intoxicating. The extent of palm wine consumption varies from location to location, although the standard is roughly 0.5–5.0 l per person per day, with more on weekends and during social events (Steinkraus 1983:316, 324). The importance of palm wines in these communities cannot be emphasized enough; they play a key social role in festivals, ceremonies (both marriage and funeral), and in social interactions such as dispute settlements (Steinkraus 1983:315). Palm wine is also used as a starter leavening agent for breads and cakes in Sri Lanka and Malaysia, while in other areas it is revered for its perceived medicinal qualities as a tonic, sedative, and mild laxative (Steinkraus 1983:316).

To extract the sap, either an unexpanded flower is cut off or the trunk is slit (Steinkraus 1983:317). The sap can be consumed immediately or allowed to ferment. The sap actually begins to ferment as soon as it is collected because it is already inoculated with fermenting agents, including lactic acid bacilli and various types of yeasts (Djien 1982:26; Hardwick et al. 1995:81; Jain et al. 2011:716; Steinkraus 1983:321). These agents mean that palm sap fermentation “is always alcoholic-lactic-acetic acid fermentation” (Jain et al. 2011:716). Because of its inherent ability to ferment, palm wine is considered one of the earliest fermented beverages. Traditionally, fermentation is stopped within 24 h and the wine is bottled or drunk before it becomes unpleasant or unacceptable to drink (Steinkraus 1983:321; Iwuoha and Eke 1996:530).


Honey

The use of honey in alcohol production seems to be found wherever alcohol is produced. Honey has been considered a prestige item, and people have gone to great lengths to acquire it (McGovern 2009:235–237). Evidence from historical sources and archaeological remains indicates that honey was used as an adjunct in beer (and wine) production, both as a sweetening agent and as a means to increase alcohol content (McGovern 2009; McGovern et al. 2004:17596; Vencl 1994:309; Steinkraus 1983; Hornsey 2003). Honey not only provides fermentable sugars, but is also a source of alcohol-producing yeasts. Therefore, honey only requires dilution with water for fermentation to begin (Pederson 1979:211; Steinkraus 1983:305, 306). In temperate climates, the concentration of sugars in honey generally ranges from 60% to 80%, and the alcohol content of honey wine, or mead, can range from 6.4% to 20.8% (McGovern et al. 2004:17596; Steinkraus 1983:306).

The Ethiopian beverage tej is a good example of a traditional honey-based alcohol that is produced at the household level. Historically, tej was reserved “exclusively for the ruler and his retinue” and it played a role in Ethiopian social stratification well into the early twentieth century (McGovern 2009:234). Contemporary tej consumption is specifically reserved for special social occasions because of the expense associated with acquiring honey (Steinkraus 1983:306, 307).


Fruit

Even apart from grape-based wines, fruit has an extensive history in alcohol production as a source of fermentable sugars and yeasts, and as a flavoring agent. Most fruit wines tend to be derived from the sweet pulpy mass of the fruit itself. These pulp-based mashes are fermented, most commonly by yeast, and consumed at special events or on occasions of significance. A typical example of such a beverage is Indian jackfruit wine, which is consumed primarily at social events and has an alcohol content of roughly 7–8% (Steinkraus 1983:337–338). Similarly, Kenyan urwaga or mwenge is a wine-type beverage produced primarily from bananas, although grains such as millet, sorghum, and maize are also commonly added. Ciders are also traditionally produced from lower-sugar fruits such as apples or pears and have low alcohol contents, roughly 2–8%.

Thus, knowledge of fermented alcoholic liquids may have preceded intentional brewing by a number of millennia and could have established the basic information necessary for more complex intentional brewing. However, all of the above alcoholic preparations involved minimal manipulation and were heavily dependent on seasonal occurrences of fruits and saps, and therefore must have been limited to certain seasons. Dried fruits may have been stored for longer periods and could have been mashed with water; however, for whatever reasons, few ethnographers refer to brewing with dried fruits. Cereals, however, are easily stored and are typically used for brewing in all seasons, and they were also some of the first domesticates. Therefore, it is pertinent to examine the constraints involved in their use for brewing and the suitability of the individual species.


Cereals

Of the eight staple cereals (wheat, rye, barley, oats, sorghum, millets, maize, and rice), all can undergo a similar malting process and, under suitable brewing conditions, can produce beer by alcoholic fermentation with varying degrees of success (Hornsey 2003; see Usansa 2008 and Adebowale et al. 2010 for evidence of rice malting). We focus on rye, wheat (einkorn), and barley as possibilities for early brewing, as these are the earliest domesticated grains in the Near East. Since we could not obtain wild forms of these grains, we added husks to domesticated varieties to simulate the wild grains, and we also used an early form of einkorn which was kindly supplied by George Willcox.

Canuel conducted experiments with each of these grains in order to observe the impact that husks and viscosity had on mashing and fermentation. Three different processes were used to saccharify the grains: germination and oven drying, mastication, and industrial malting. As most domesticated rye and wheat lack husks, rice hulls were added to some of the rye and wheat samples in order to mimic the husks that would have been found on wild species. Based on Canuel’s experience, the optimal volume ratio of husks to grain for brewing rye is around 20–35%, which is about what might be expected from expediently threshed wild rye. Rice hulls are a recognized industry alternative that has no effect on any other part of the brewing process. It was unnecessary to add husks to einkorn brews because many einkorn grains firmly retained their husks even when processed with a mortar and pestle. All grains were mashed with a 3:1 ratio of water to grain. Rye was also tested with a 4:1 ratio due to its higher viscosity. All of the experiments yielded acceptable saccharification and fermentation, with alcohol yields ranging from 0.5% to 2.6% (see Table 1). The highest alcohol contents were achieved with the control samples, in which industrial malt was used with rice hulls at a 3:1 ratio of water to grain. The samples with the lowest alcohol content were those that underwent mastication. It should be noted that all the methods of saccharification used in these samples represent viable strategies to produce alcohol, and with improvements in processing techniques they probably could have yielded between 3% and 6% alcohol content, which would have been more in line with conventional brewing results.
Table 1

Results of experimental beer production using rye, barley, and wheat

[Table values not recoverable. Columns: alcohol % produced at a 3:1 water-to-grain ratio; alcohol % produced at a 3:1 ratio agitated every 2 h; alcohol % produced at a 4:1 ratio. Rows group the grains by saccharification method, including grains malted in the lab (rye, rye with hulls, barley, wheat (red)) and industry-malted controls (rye with hulls, wheat (red)). Some rye readings are marked “awaiting reading,” and the organic barley samples are marked “failed to malt” in all columns.]

We are currently awaiting results from professional breweries able to provide an alcohol percentage for the highly viscous rye mash

“Failed to malt” refers to the failure of the organic barley samples to germinate; due to availability, these samples had been dehusked during processing

Results of “not applicable” (n/a) refer to tests that were not conducted, primarily non-rye tests that we felt did not require agitation or a higher water-to-grain ratio for successful measurement (tests with a higher water-to-grain ratio are less concentrated and thus produce less alcohol; however, they support the hypothesis that even at these non-optimal ratios, alcohol production is viable)
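The alcohol percentages in such small-scale experiments are typically derived from specific-gravity readings taken before and after fermentation. The article does not describe the measurement method used, so the following is only a hedged sketch of the standard homebrewing approximation, with hypothetical gravity values chosen to fall in the reported 0.5–2.6% range:

```python
def abv_percent(og: float, fg: float) -> float:
    """Estimate alcohol by volume (%) from original gravity (OG) and
    final gravity (FG), using the common homebrewing approximation
    ABV ~= (OG - FG) * 131.25."""
    if fg > og:
        raise ValueError("final gravity cannot exceed original gravity")
    return (og - fg) * 131.25

# Hypothetical readings for a dilute experimental wort: an original
# gravity of 1.020 fermenting down to 1.005 corresponds to
print(round(abv_percent(1.020, 1.005), 2))  # prints 1.97 (about 2% ABV)
```

A weak wort like those produced at high water-to-grain ratios would show only a small gravity drop, which is consistent with the low alcohol yields reported above.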


Given claims that rye was possibly the first domesticated grain (although these claims are controversial), it is of particular interest to assess the suitability of rye for brewing. In modern alcohol production, domesticated rye (Secale cereale) is sometimes used in whiskey-making, but it is not favored by many beer brewers because it creates a very gelatinized wort and, today, is “naked,” meaning it has no husk. The “gelatinization of starches” is a technical term for the liquefaction process during mashing, whereas the “gelatinization of the wort” is a descriptive term for a wort that has become jelly-like or gelatinous because of the viscosity of the malted and coarsely ground grains used. Wild rye has a husk, but it can be dehusked relatively easily compared to other wild grains such as wheat (Hillman et al. 2001:390). Husks are important during the mashing phase because they reduce gelatinization of the wort and create a natural filter that helps separate the grain fragments from the wort (Hornsey 2003:15).

Mashed rye is highly viscous, and the wort produced from rye mashes does not fully separate from the grains but tends to become gelatinous. This is primarily the result of arabinoxylans and polysaccharides, including β-glucans, which are extracted from the grains along with the starches during mashing (Balcerek and Pielech-Przybylska 2009; Li et al. 2005). The arabinoxylans make up a major part of the cell walls of the starchy endosperm and cause increased viscosity on contact with warm or hot water (Li et al. 2005; Lu and Li 2006). Most starch-rich cereals possess β-glucans and other polysaccharides, but rye contains a greater amount combined with a significant lack of enzymes capable of breaking them down (Bamforth 2003:78, 127; Keegstra and Walton 2006; Scheffler and Bamforth 2005). This inhibits the liquefaction of starches during mashing, the conversion of starches to sugars, and the access of yeasts to those sugars (Briggs et al. 2004:30; Scheffler and Bamforth 2005). To reduce these problems, mashes produced with high proportions of rye (15–100%) tend to be made with a higher ratio of water to grain (Hornsey 1999:181–182; Briggs et al. 2004:29–30). In addition, agitating the wort would help yeast penetrate deeper and fully attenuate the mixture.

In Canuel’s experiments, viscosity proved to be a detriment to both the mashing and the fermenting of the rye grains. The samples formed a gelatinous mass of suspended grains which did not separate from the wort. The rye grains that were mashed first at a 3:1 and later at a 4:1 ratio of water to grain showed some evidence of separation, though they remained very viscous compared to the other grains. After the yeast inoculation, all the rye samples without added rice hulls visibly stopped fermenting much sooner than either the barley or wheat samples. The masticated rye ceased fermenting the earliest and achieved only approximately 0.8% alcohol content. However, once the samples visibly stopped fermenting, they were agitated, which caused them to begin fermenting again. In general, the samples at the 4:1 ratio yielded lower alcohol percentages than those at 3:1 because the higher water ratio reduced the overall effectiveness of the grains’ diastatic power.

In comparison to the barley and wheat samples, both of which demonstrated a distinct separation between the wort and grain bed, the rye sample was an undrinkable, gelatinous mixture into which neither the enzymes nor the yeast were able to easily penetrate. With frequent stirring, however, such a fermented mash may originally have produced an alcoholic paste similar to those made in Indonesia (Steinkraus 1983:381–400).

The experimental rye brews that contained rice hulls fared somewhat better: the upper two thirds of the grain bed visibly fermented before agitation because yeasts could penetrate more readily, yielding much higher alcohol concentrations with less agitation than the rye wort without husks, in which only a thin layer at the top visibly fermented. The hulls also allowed a larger proportion of the grains to become suspended in the gelatinous wort. The CO2 produced during fermentation escaped to the top of the mixture, carrying grains up with it, while the jelly-like nature of the mash served to suspend these grains in the wort, creating a thick soup-like mixture. Because the yeast was able to penetrate deeper into the samples with husks, far more grains were carried up into the wort. Thus, the mixture became nearly solid with the water-soaked grains.

Apart from these problems, rye does have some brewing advantages over other grains. Rye in general has considerable diastatic power, and some varieties can almost rival barley in enzyme activity (Hornsey 2003; Pomeranz et al. 1973). Whereas barley gains its high diastatic power from a number of enzymes, rye’s diastatic power comes primarily from its large amount of α-amylase. This means that barley has a large temperature range in which there is some starch conversion, whereas rye has a more limited temperature range but will convert a large amount of starch at those temperatures (Pomeranz et al. 1973; Lersrutaiyotin et al. 1991:293–294). Rye also tends to produce a higher percentage of malt extract and has higher amounts of protein in its malt, as compared to barley (Pomeranz et al. 1973; Lersrutaiyotin et al. 1991:293–294). Rye containing ergot mold might also provide an additional psychotropic effect in any resulting beer, and thus may have been perceived as more desirable to use. On the other hand, rye produces a wort with a significantly higher pH than barley, which can negatively affect yeast growth and the beer’s taste, but which also helps prevent microbial contamination during fermentation (Pomeranz et al. 1973).

As in modern brewing, rye was often used historically as an adjunct. Some Sumerian accounts of brewing combined rye with barley, which would have had several advantages over using either grain alone. Both barley and rye have considerable diastatic power, and their different combinations of enzymes allow a greater range and amount of starch conversion (McGovern 2009; Briggs et al. 2004:29–30; R. Hayden 1993; Hornsey 2003). Additionally, the barley would have provided husks to aid in draining the wort and in combating the viscosity of the rye.

The possibly domesticated rye that was reported from Natufian levels of Abu Hureyra 1 (see “Hypothesis 4”) still had a number of wild characteristics. Most significantly for brewing, it possessed a husk, although the husk separated easily from some of the grains (Hillman et al. 2001:390). While the use of rye may have involved problems of viscosity, rye malted with some or all of its husks would have achieved satisfactory saccharification and fermentation. Agitation likely would also have helped combat the effects of a viscous wort. Considering the gelatinous mass that Canuel observed in his experimental rye samples, early rye brews may have produced a far different beer than what we would recognize today. These rye beers may have been more akin to an alcoholic, nutrient-rich thin gruel or even paste, depending on how much water or husk was used in the brewing process.

What emerges clearly from the experiments with brewing rye is the critical importance of retaining a large proportion of husks for brewing. Cursorily threshed and ground wild rye may have been an initially optimal grain for brewing in this respect.

Wheat (Einkorn)

Although domesticated wheat (in particular, Triticum aestivum or common wheat) is the second most popular grain in modern Western brewing, it has some problematic features for contemporary brewers (Briggs et al. 2004:11). Like rye, most modern domesticated wheats are free threshing and naked grained, so they do not naturally filter the grains from the wort during mashing. The lack of a husk is also problematic for the malting process because the sprouting embryo is unprotected and therefore very delicate. It can easily be detached from the grain when being moved, and its loss would hamper the malting process and inhibit later mashing and fermentation (Hornsey 2003:15). However, these would likely not have been issues for the early brewers because wild forms of wheat have tough husks that are difficult to separate from the grain, as do domesticated einkorn and early emmer (Hillman et al. 2001:390; Hornsey 2003:15; Zohary and Hopf 2000:32–34).

In our own experiments, we found that dehusking einkorn was very difficult using a traditional mortar and pestle. Only about 20–30% of the grains separated from their husks after 15 min of pounding 100 g of threshed ears. This corresponds well with other experiments by Meurers-Balke and Lüning (1992). We used einkorn in this poorly dehusked state for brewing, and most of the grains germinated well for malting.

Wheat does have one particular advantage over most other grains in that a relatively low temperature (52–64°C) is needed for the liquefaction of wheat starch during mashing. Of the eight primary grains, only wheat, oats, barley, and millet have a starting liquefaction temperature under 60°C, and of those four, wheat has the lowest (Hoover et al. 2003; Stewart 1995:130). This means that when using wheat for brewing, brewers would have a larger range of temperatures to work with. This would require less precision in maintaining heat, although the temperature would need to remain under 70°C, which is the point at which all active enzymes in the diastase are denatured.

This range in temperature might have facilitated the use of a single container for mashing and fermenting. The mash could slowly be brought up to the appropriate temperature through the addition of fire-heated rocks. The single container, if made of ground stone or wood, would provide a stable and insulated heating environment for the full range of enzyme activities in any season. Such a method would optimize liquefaction and starch breakdown, and potentially fermentation, and would have been relatively easy for ancient brewers to use (Hornsey 2003:15).
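To illustrate why stone boiling makes this kind of temperature control plausible, a simple heat-balance sketch can be made for bringing a mash into wheat’s liquefaction range (52–64°C) without exceeding the ~70°C enzyme-denaturation ceiling. The specific heats, rock temperature, and mash size below are our illustrative assumptions (the mash is approximated as water), not figures from the text:

```python
# Heat-balance sketch for stone boiling a mash (illustrative assumptions).

C_WATER = 4.186   # kJ/(kg*K), specific heat of water; mash approximated as water
C_ROCK = 0.84     # kJ/(kg*K), typical for basalt or limestone (assumed)

def equilibrium_temp(m_mash, t_mash, m_rock, t_rock):
    """Mixing temperature when hot rocks equilibrate with the mash,
    ignoring heat lost to the container and the air."""
    heat = m_mash * C_WATER * t_mash + m_rock * C_ROCK * t_rock
    capacity = m_mash * C_WATER + m_rock * C_ROCK
    return heat / capacity

def rock_mass_needed(m_mash, t_mash, t_target, t_rock):
    """Rock mass required to raise the mash from t_mash to t_target."""
    return m_mash * C_WATER * (t_target - t_mash) / (C_ROCK * (t_rock - t_target))

# e.g., 10 kg of mash at 20 degrees C, with rocks from the fire at ~500 degrees C:
m = rock_mass_needed(10, 20, 60, 500)
t = equilibrium_temp(10, 20, m, 500)
print(round(m, 1), round(t, 1))  # prints 4.5 60.0
```

On these assumptions, roughly 4.5 kg of hot rock per 10 kg of mash reaches the target range; adding stones a few at a time, as in the gradual heating described above, would keep the mash below the denaturation point.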

Though it provides less saccharification than either rye or barley, modern naked wheat malt (T. aestivum) has sufficient diastatic power to fully saccharify its own starches (Pomeranz et al. 1973; Lersrutaiyotin et al. 1991:293–294). Canuel's experiments with 100% wheat, both modern naked wheat and einkorn, demonstrated that although wheat has viscous characteristics, it does not create a gelatinous wort to the same extent that rye does. After mashing, the grain bed settles and there is a clear separation between the grain and the wort. As CO2 escapes from the grain bed, it carries grains up into the wort, and the wort's viscosity keeps some of them in suspension. These grains eventually fall back into the grain bed, however, unlike the rye grains, which remained suspended in the wort for the duration of fermentation. Additionally, the wheat samples did not need to be agitated to induce continued fermentation as the rye samples did; Canuel concluded that the natural husks in the partly dehusked einkorn facilitated some degree of yeast movement throughout the grain bed.

Wheat's high viscosity is a problem for modern brewing, which relies on filtration or a runoff stage after mashing. In a more traditional setting, however, the wort could be readily removed by scooping, pouring, or siphoning with a straw. Traditional wheat beer shares many of the standard characteristics of a modern beer and tends to have an alcohol content ranging from 1% to 4%. The wheat beer produced in Canuel's experiments possessed a distinct wheaty taste and aroma, even in the samples soured by Lactobacillus. The einkorn samples, which were threshed and malted by Canuel in the lab, saccharified and fermented extremely well; their alcohol production was almost on par with that of the naked, industrially malted wheat grains mashed with rice hulls (see Table 1).


In contemporary brewing, domesticated malted barley (Hordeum vulgare) is by far the most commonly used grain. It has several inherent advantages over other grains and, as such, is often the standard of comparison in discussions of brewing. It grows in either a two- or six-row grouping, of which the two-row is remarkably drought resistant, second only to emmer wheat (Briggs et al. 2004:11; Riehl 2009:100). Early forms had tough husks, making them difficult to thresh (Hillman et al. 2001:390). These tough husks would have made barley much more suitable for brewing than breadmaking, although most early breads may have included substantial amounts of ground-up husks. However, the dehusked grains in mortars may have been preferentially separated and used for making bread (especially since they may have been too damaged to germinate), while the remaining husked grains could have been used for brewing.

One of the reasons that domesticated barley has become one of today's most desired brewing grains is that it contains or produces all the enzymes necessary for optimal saccharification of its own starches as well as those of any added adjuncts (Linko et al. 1998:85; Pomeranz et al. 1973; Lersrutaiyotin et al. 1991:293–294). During malting, barley produces "a sufficient spectrum of enzymes to reduce the endosperm substance entirely to its basic building blocks—e.g., primarily sugars and amino acids" along the entirety of the grain (Lewis and Bamforth 2006:82; Linko et al. 1998:86–87). Barley is also a source of other nonsugar components, including vitamins and minerals, which ensure an abundance of nutrients alongside sugars for the yeast during fermentation (Lewis and Bamforth 2006:80). However, analysis of wild barley indicates that it generally has much higher levels of pentosans and beta-glucan than domesticated varieties; these factors, together with the smaller wild seeds and greater cell wall proportions, suggest very low starch contents as well as low levels of malt extract from wild forms (Henry and Brown 1987). This may explain the initial emphasis on other grains in the domestication (and possible brewing) process.

However, once domesticated, barley would have offered additional benefits for brewing. Domesticated barley retains a natural husk like its wild progenitor, while domesticated rye and many domesticated wheats have lost theirs. The natural husk helps limit the viscosity caused by barley's arabinoxylans and polysaccharides during saccharification (Scheffler and Bamforth 2005; Li et al. 2005; Bamforth 2003:78–79, 107). Barley's viscous characteristics are also tempered by several enzymes, such as β-glucanases, which help break down the arabinoxylans and polysaccharides in the water (Scheffler and Bamforth 2005; Li et al. 2005; Bamforth 2003:78–79, 107). Canuel's experimental brewing with 100% domesticated barley demonstrated its advantages for both the mashing and fermenting stages. The mash remained consistently fluid and did not suffer the viscosity problems of rye and wheat. All the barley samples, except those saccharified by mastication, had their starches fully converted to sugars in a comparatively short time without agitation. Similarly, during fermentation, the yeast was able to fully access and consume the sugars without stirring or agitation.

The visible evidence of fermentation, the ascent of grains into the wort as the CO2 escaped the grain bed, was particularly pronounced in the barley samples compared to the rye and wheat samples. The movement of grains was enough to effectively self-agitate the grain bed providing more complete access for the yeast to all available nutrients and sugars. Whereas the entire grain bed in the barley samples was subjected to visible fermentation, only the upper two-thirds of the unstirred wheat grain bed and upper quarter of the rye bed demonstrated upward migration of grain from CO2 production. Additionally, the viscosity of the barley samples was low enough that the ejected grains immediately settled back onto the grain bed, as compared to the wheat samples, in which ejected grains took approximately an hour to resettle, and rye samples, in which the grains remained suspended in the wort for several days.

Barley is not without problematic features, though. Fungal organisms such as Fusarium are known to contaminate barley malt, which "may lead to formation of deoxynivalenol (DON) and other mycotoxins" that are not destroyed during the brewing process (Linko et al. 1998:88). However, the addition of Lactobacillus plantarum and Pediococcus pentosaceus strains of lactic acid bacteria to barley during the steeping stage can inhibit the growth of Fusaria without adversely affecting the brewing process (Linko et al. 1998:88).

Choice of Grains

There are a number of factors that would have been important to Natufian people in their choice of grains for brewing purposes. Local availability, performance in alcohol production, taste, toxic contaminating organisms, and technical problems would all have been basic concerns. Some of these criteria are difficult to evaluate from a contemporary perspective, especially since early brewers undoubtedly approached the choice of grains very differently from modern brewers. For example, they likely would not have shared modern brewers' concerns about the lack of husks on grains, the presence of hazes in the beer, or the separation of wort from mash. The earliest domesticated grains would still have had husks like their wild progenitors. As Meurers-Balke and Lüning (1992; see also Zohary and Hopf 2000:32–34) have determined, the time and energy required to completely remove husks from einkorn and emmer, in particular, seem excessive in relation to the product obtained.

However, complete removal of husks may not have been critical for all uses of these grains. The retention of husks would have been completely unacceptable only for the preparation of gruels, where the spike-like husks would have remained intact and rendered the gruel inedible (Harlan 1967; Wright 1994:242). However, for breads, a certain amount of husks could certainly have been ground up on milling stones with the grain and become “fiber” in the coarse, but easily consumed, flour. Similarly, with brewing, a relatively high proportion of husked grains would have been sprouted, malted, and then ground up, which would have resulted in only beneficial effects on the brewing process. Thus, the time spent pounding grain in mortars could have been minimized by separating the most easily dehusked grains (with a sieve or winnowing tray) for use in gruels, and using the remainder of the resistant husked grains for brewing or grinding up into coarse flours. This triple strategy for the use of dehusking products would make optimally efficient use of all harvested grains.

As with most ethnographic tribal brewing, the runoff stage in which husks would have been particularly useful probably was not part of an early brewing process, especially if the same container was used for both mashing and fermenting. It is likely that difficulties with high viscosity in rye and wheat could have been easily dealt with by adding more water and manually agitating or stirring the grain bed. Similarly, the problem of grains suspended in the wort after fermentation could have been dealt with by straining the wort through a basket, by extracting through the bottom of the fermentation vessel (e.g., via a reed in the bottom of a stoppered stone pipe mortar) or by simply consuming them with the fermented liquid as often occurs with chicha.

Over time, early brewers likely experimented with different types of cereals alone, together, and with adjuncts. Cereals that were better suited to brewing, that yielded a product with appropriate alcohol content and taste, and that required minimal supervision would likely have been favored over other grains. Considering that rye is more easily dehusked, at least in part, and may have produced more notable effects if ergot-bearing rye was used, it may have been a first choice for beer making among some groups. However, since it is prone to problems and requires attention during the brewing process, it may have been abandoned in favor of wheat or barley after an initial stage of brewing experimentation.

Thus, we find that all three cereal taxa that were first domesticated had different advantages and disadvantages, but all would have provided more than satisfactory materials for brewing relatively low alcohol beers or pastes similar to many traditional beers in village societies around the world (Hornsey 2003:8, 21, 25).

Hypothesis 4: The Role of Cereals in Natufian and Early Neolithic Subsistence

According to the feasting model proposed for initial brewing (discussed under "Hypothesis 5"), and on the basis of an almost universal pattern in traditional societies of alcohol consumption taking place only on special occasions, we expect several specific patterns of cereal use and procurement related to brewing in the Natufian. Thus, if cereals were used in brewing (and were therefore highly desired but entailed high labor costs): (a) cereals should have formed anywhere from 5% to 20% (as beer or bread) of the total diet of early brewing groups (an estimate based on ethnographic examples); (b) cereals should have been the objects of special procurement efforts in areas where they were scarce or absent; and (c) domesticated cereals should have made up a small percentage of the total cereals consumed and should have remained at moderate to low frequency levels, at least until genetic improvements made cultivated forms more efficient to produce than harvesting wild cereals.

To elaborate on our first expectation, there are some ethnographic cases where small and large feasts rotate between households so frequently that beer consumption alone is estimated to provide about one fifth to one third or more of all the calories consumed in certain communities (Arthur 2003:517; Barth 1967:161; Haggblade and Holzapfel 1989:281; Dietler 2001:81). Generally, however, we expect feasts to be episodic and to provide limited contributions to overall subsistence. This would presumably have been especially true under conditions of initial feasting and cultivation, in which high frequencies of feasting and beer consumption (more than 20% of total calories) seem unlikely, while 5% seems like a minimal amount for feasts to have had a significant impact. In contrast to the assumption that cereals, especially in the form of beer, were a high-cost food, Harlan (1967) and others have argued that wild cereals were a major subsistence resource: 1 h of gathering could yield a kilogram of clean grain (after processing), or up to 2,700 kcal per hour spent harvesting. However, Harlan never took into account processing times, which are substantial (see Barlow and Heck 2002:131–132). If cereals were so easily obtained and prepared, they could be expected to represent very high proportions of the paleobotanical record in Late Epipaleolithic sites.

Using ethnographic and experimental data, some researchers have questioned the high returns per hour of cereal collecting reported by Harlan. For instance, Ladizinsky (1975:265–266) collected wild wheat from stands that were "perhaps the most abundant in its entire distribution area," yet yields averaged only 384 g/h, far below Harlan's reported 2 kg/h of harvest time (1 kg after dehusking and winnowing), while yields for wild barley and oats were almost half the wheat yields. Other researchers have emphasized the labor-intensive aspects of processing cereals, especially dehusking the grains, resulting in net returns as low as 500 kcal per hour after harvesting and processing. It took experimenters working with traditional grinding tools between 0.63 and 1.54 h to dehusk 1 kg of grain (Foxhall and Forbes 1982:77; Meurers-Balke and Lüning 1992:356–357; Wright 1994:243–246). Barlow and Heck (2002:133–134) examined the labor involved in producing flour and found that it would take 7.44 h of processing to produce a single kilogram of coarse flour with considerable chaff (although Harlan (1967) reported 46% of his wild harvest as "clean grain"). High processing costs appear to account for the unusually low calorie return rates for most grass seeds and their low ranking in diet choices (Kelly 1995:81–82; Gremillion 2004). In fact, ethnographers found that Indigenous Australians would generally resort to gathering and processing seeds only when their long storage qualities or low rates of return were required, such as in times of population or environmental stress (Wright 1994:244).
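The competing return-rate estimates above can be combined into a simple net-return calculation. The following is an illustrative sketch, not data from the cited studies: the caloric density of roughly 2,700 kcal per kilogram of clean grain follows Harlan's figure, and the harvest and processing rates are the endpoints of the ranges quoted in the text.

```python
# Illustrative net caloric return per hour of total labor (harvesting
# plus processing) for wild cereals, using figures quoted in the text.
# The kcal density (~2,700 kcal/kg clean grain) follows Harlan's estimate.

def net_return_kcal_per_hr(harvest_kg_per_hr, processing_hr_per_kg,
                           kcal_per_kg=2700):
    """Net kcal per hour of combined harvesting and processing labor."""
    total_hr_per_kg = 1.0 / harvest_kg_per_hr + processing_hr_per_kg
    return kcal_per_kg / total_hr_per_kg

# Harlan's optimistic scenario: 1 kg of clean grain per harvest hour,
# dehusked at the fast end of the experimental range (0.63 h/kg).
fast = net_return_kcal_per_hr(1.0, 0.63)     # roughly 1,660 kcal/h

# Ladizinsky's yields (384 g/h) with slow dehusking (1.54 h/kg).
slow = net_return_kcal_per_hr(0.384, 1.54)   # roughly 650 kcal/h

# Flour production at 7.44 h/kg (Barlow and Heck) drops the return further.
flour = net_return_kcal_per_hr(1.0, 7.44)    # roughly 320 kcal/h
```

Even under generous assumptions, processing dominates the labor budget, which is consistent with the low net returns (on the order of 500 kcal/h) cited above.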

On the other hand, the high-expense characteristics of some grains and flours are exactly what could have made them attractive for feasting contexts, in which guests would be offered desirable foods that were difficult or expensive to prepare. Wadley and Martin (1993) have argued that grain consumption would have been particularly relished among such groups, documenting the mild euphoric effects of cereal consumption due to the presence of opioid-like exorphins in cereal grains, not to mention the euphoric effects of beer or even freshly baked breads. Despite such desirable tastes or effects, it seems unlikely (as the Hadza have attested; Woodburn 1966) that the extra effort required to cultivate and process grains for daily foods would have been worth the expenditure.

The importance of grains in the Near East, as presumably represented by grinding stones, appears to increase in the Early Natufian. This occurs at a time when a number of researchers view the subsistence economy as optimal (see Maher et al. 2011:5–6, 9, 16; Wright 1994:252), resulting in increased sedentism (or at least regular visiting of locations with permanent stone architecture; Hayden 2004; Boyd 2006). This is not consistent with the optimal foraging expectation, proposed by Barlow and Heck and others, that cereals were used only in times of dire need.

In order to assess the potential role of cereals in daily subsistence vs. episodic feasting, it must be kept in mind that a number of biasing factors likely inflate cereal remains and under-represent nut and geophyte remains archaeologically. Barlow and Heck (2002:139–140) have documented such biases arising from preparation techniques and transport cost considerations. Similarly, Hillman (2000) has argued that many plants not represented in the paleobotanical assemblage of Abu Hureyra were probably important staples. Unfortunately, there are few quantified estimates of cereals and overall subsistence remains, but we discuss what data are currently available.

The History of Cereal Use

Cereals do not appear to have formed significant portions of the plant remains until Ohalo II (23,500–22,500 cal BP), where they made up approximately 27% of the total edible plants found at this seasonal camp, though they were greatly outnumbered by small-grained grasses (Kislev et al. 1992; Nadel et al. 2006; Piperno et al. 2004; Rosenberg et al. 1998:656; Weiss et al. 2004:9552). Stone mortars and pestles (unlikely to have been used for dehusking grains) dominated the Early Natufian assemblages, especially in woodland zones, whereas grinding stones, although present in the Early Natufian, became more common in Late Natufian sites and were more likely to have been used for producing flour (or ground/crushed malted grains) from cereals (Wright 1994:254–255).

It was not until the Late Natufian period (12,800–11,500 cal BP; Kuijt 2009) that the earliest evidence of possible cereal domestication appears in the Levant, although this is now disputed (Colledge and Conolly 2010; Özkan et al. 2010; Stordeur et al. 2010; see Table 2). Excavations at Abu Hureyra 1 (13,250–12,750 cal BP; Willcox et al. 2009) recovered 45 cereal grains from Natufian levels displaying some domesticated characteristics (Hillman 2000:348, 379, 527–528). Of the 12 grains selected for AMS dating, only three rye grains dated to the occupation of Abu Hureyra 1, the earliest at 11,140 ± 100 14C yr BP (uncalibrated); the other nine were contaminants from later periods (Hillman 2000:379). The grains with domesticated traits appeared alongside a large array of wild plants, including wild wheats and rye, and the remaining 36 possibly domesticated Natufian-period grains made up less than 6% of the cereal assemblage of 611 grains (Garrard 1999; Hillman 2000; Weiss et al. 2004:9553). The total numbers of rye, barley, and wheat grains from Abu Hureyra 1 make up only 10.95% of selected identified edible plants recovered (Willcox et al. 2009:153).
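As a quick arithmetic check, the Abu Hureyra 1 proportions quoted above follow directly from the published counts. This sketch uses only the figures from Hillman (2000) cited in the text:

```python
# Check of the Abu Hureyra 1 proportions quoted in the text
# (counts from Hillman 2000, as cited above).

candidate_domesticates = 45   # grains showing some domesticated traits
later_contaminants = 9        # AMS-dated to later periods
total_cereal_grains = 611     # Natufian cereal assemblage

remaining = candidate_domesticates - later_contaminants
share_pct = 100 * remaining / total_cereal_grains

print(remaining, round(share_pct, 2))   # 36 grains, 5.89% (under 6%)
```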
Table 2

Absolute and proportional counts of cereals in selected Near Eastern Epipaleolithic and Pre-Pottery Neolithic sites

[Most cell values of this table were lost in extraction; only the row and column structure and a few entries are recoverable.]

Columns: dates of sites; total number of (selected) edible plant remains; total number of seeds; total number of SGG (small-grained grasses) and cereals; total number of cereals; total number of SGG; total number of legumes; total number of nuts; cereal % of total (selected) edible plants; cereal % of total SGG and cereals; cereal % of total seed remains; cereal % of total volume of SGG and cereals; total number of domesticated cereal grains; domesticated cereals % of total cereals; domesticated cereals % of total edible plants.

Rows: Ohalo II (23,500–22,500 cal BP); Late Natufian (12,800–11,500 cal BP): Abu Hureyra 1 (13,250–12,750 cal BP), Iraq ed-Dubb (9,300 cal BC), Mureybit I (12,500–12,000 cal BP), Mureybit II (12,000–11,500 cal BP); Pre-Pottery Neolithic A (ca. 11,700–10,500 cal BP): Netiv Hagdud (10,000–9,400 uncal BP), Iraq ed-Dubb (13,500–10,500 cal BP), Mureybit III (11,500–11,200 cal BP); Pre-Pottery Neolithic B (ca. 10,500–8,250 cal BP): Aswad I (8,500–8,200 cal BC), Nevalı Çori (8,500–8,200 cal BC), Aswad II (9,800–9,300 uncal BP).

Surviving entries: Ohalo II, cereal % of total edible plants ca. 27 (c), domesticated cereal grains 0, with 12 "mutant" grains noted; Abu Hureyra 1, "over 14,000" (b); Iraq ed-Dubb (Late Natufian), domesticated cereal grains 0 (PPNA contaminants); Mureybit III, "99 or 74".

There are some number discrepancies between two separate sources or between our calculations and another source; bolded numbers were used where needed for calculations.

Ohalo II data from Weiss et al. 2004

Abu Hureyra data from Hillman 2000:348, 379

Iraq ed-Dubb data from Colledge 2001:143, 153

Mureybit data from van Zeist and Bakker-Heeres 1984:176–179, 186

Aswad data from van Zeist and Bakker-Heeres 1982:185

a Willcox et al. 2009:153

b Savard et al. 2006:181

c Rosenberg et al. 1998:656

d Weiss et al. 2004:9553

At Mureybit, the amounts of wild cereal remains changed drastically throughout the occupation. Excavations of Mureybit 1 (12,500–12,000 cal BP; Willcox et al. 2009) recovered a total of 3.5 barley grains and 12 einkorn grains, while 2.5 barley grains and eight einkorn grains were recovered from Mureybit 2 (12,000–11,500 cal BP; van Zeist and Bakker-Heeres 1984:176–177). Out of the total number of plants identified as edible at Mureybit, wild barley and wild einkorn made up only 4% in the first phase of occupation and less than 1% in the second phase (Willcox et al. 2009:153). In contrast, the PPNA levels of Mureybit 3 (11,500–11,200 cal BP; Willcox et al. 2009) contained 173.5 grains of barley and 2,690 grains of einkorn (van Zeist and Bakker-Heeres 1984:178–179). Out of the same selection of edible plants, barley and einkorn made up 83% of the total in this occupation phase (Willcox et al. 2009:153).

In addition to the wild grains, a single grain and a spikelet fork, both of domesticated emmer, were found at Mureybit 2, and a single grain each of domesticated barley and domesticated durum wheat or common wheat were found at Mureybit 3 (van Zeist and Bakker-Heeres 1984:176, 178, 186). However, weed remains have been found throughout the occupation phases at Mureybit (van Zeist and Bakker-Heeres 1984:198), suggesting that cultivation may have been an early and steady practice at the site (Hillman 2000:378).

Several PPNA sites (ca. 11,800–10,500 cal BP; Maher et al. 2011:16) have offered evidence of cereal domestication, although these identifications, too, are now disputed, with a consensus emerging that cultivation was practiced at many PPNA sites but without the morphological or genetic changes typical of domesticated plants. In addition, some PPNA occupations may actually begin in the Younger Dryas (Colledge and Conolly 2010; Özkan et al. 2010; Weiss et al. 2006; Stordeur et al. 2010; Willcox 2011; Tanno and Willcox 2011). PPNA-dated layers of Jericho contained "but a few" grains reported as domesticated emmer, einkorn, and barley (Nesbitt 2002:121; Hopf 1983). Iraq ed-Dubb (13,500–10,500 cal BP) contained seven spikelet forks of either domesticated emmer or einkorn, as well as some barley grains reported as domesticated (Colledge 2001:143; Kuijt 2004:296; Nesbitt 2002:121). Two of these appeared in Natufian levels but are believed to be contaminants from PPNA levels (Colledge 2001; Nesbitt 2002). Out of 183 cereal remains from Late Natufian and PPNA levels, 52 emmer grains and six barley grains were identified as domesticated, and another 50 barley grains could not be assigned to either domesticated or wild forms (Colledge 2001:143, 153). Even without factoring in the 484 unidentifiable plant remains, the total cereals make up less than 15% of the total edible plant remains, and the possibly domesticated cereals under 5% of the total edible plants recovered from Iraq ed-Dubb.
In addition to the minimal representation of cereals at Abu Hureyra 1 and the first two occupation phases of Mureybit, the PPNA sites of M’lefaat, Qermez Dere, and Netiv Hagdud revealed that wild cereals made up only "between 11% and 16% of the seed remains" (Savard et al. 2006:189); and, as Hillman (2000) and Barlow and Heck (2002) note, other plants likely to have been used for food (such as roots and acorns) probably do not appear in the botanical remains at residential sites, while cereals are undoubtedly over-represented. Moreover, relative to wild cereal remains, the small amount of domesticated cereal remains claimed for Natufian and PPNA levels of sites in the Levant suggests that domesticated cereals did not make a large contribution to the overall subsistence strategy during this period or for several millennia thereafter (Özkan et al. 2010; Tanno and Willcox 2011).

In contrast to the PPNA, much more widely accepted evidence for domestication occurs in the PPNB (ca. 10,500–8,250 cal BP; Kuijt 2008). Phase Ia levels of the PPNB-dated site of Aswad (8,700–7,500 cal BC; Stordeur 2003; Stordeur et al. 2010) contained 19 grains and 70 spikelet forks identified as domesticated emmer (Triticum dicoccum), as well as 30 grains and 17 internodes of possibly domesticated barley (Hordeum spontaneum/distichum; Nesbitt 2002:121; van Zeist and Bakker-Heeres 1982). AMS dating placed the oldest emmer grain at 9,300 ± 60 14C yr BP (uncalibrated; Willcox 1995 and personal communication). The barley grains reported as domesticated made up only 26% of the entire barley grain assemblage from all phases of occupation, and most of the domesticated grains date from the second occupation phase (Nesbitt 2002:121; van Zeist and Bakker-Heeres 1982:204). Out of the 1,989 seed remains recovered from phase I, there were 23 grains reported as domesticated emmer, 32 grains of possibly domesticated barley, and 281 unidentified cereal grain fragments (van Zeist and Bakker-Heeres 1982:185). Cereals thus made up under 17% of the total seed assemblage, and identified domesticates less than 3%. Furthermore, only domesticated emmer was found at the site; the striking absence of wild emmer suggests that it was domesticated before being introduced (van Zeist and Bakker-Heeres 1982:186).
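The Aswad phase I percentages above can be recovered directly from the published counts. This sketch uses only the figures from van Zeist and Bakker-Heeres (1982:185) cited in the text:

```python
# Check of the Aswad phase I percentages quoted in the text
# (counts from van Zeist and Bakker-Heeres 1982:185).

total_seeds = 1989
dom_emmer = 23                    # grains reported as domesticated emmer
dom_barley = 32                   # grains of possibly domesticated barley
unidentified_cereal_frags = 281   # unidentified cereal grain fragments

cereal_total = dom_emmer + dom_barley + unidentified_cereal_frags
cereal_pct = 100 * cereal_total / total_seeds
domesticate_pct = 100 * (dom_emmer + dom_barley) / total_seeds

print(round(cereal_pct, 1), round(domesticate_pct, 1))   # 16.9 and 2.8
```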

Limited Subsistence Contribution of First Cereal Domesticates

Although stands of wild cereals may have occurred in abundance near some sites and been used at some northern Natufian sites as staple foods, per Harlan's suggestions, at other sites wild cereals were scarce or locally absent and, as noted in the preceding section, do not appear to have been staples. As Peter Rowley-Conwy (2001:58–65) has suggested, the first domesticates would not have been the staple foods of pre-agrarian societies.

The expectation that people who did not live near abundant wild cereals would go to unusual lengths to acquire them for brewing or other special purposes like breadmaking seems to be borne out in the cases of Mureybet and Abu Hureyra, where the nearest wild cereals are estimated to have been located as much as 60–100 km away during the Natufian period (Willcox 2007:27, 32; Hillman et al. 2001:389). If the wheat grains found at Iraq ed-Dubb are einkorn, they would be further evidence of cultivation at this site, since it is located far from the wild einkorn distribution areas (Nesbitt 2002:121). The domesticated emmer recovered from Aswad, combined with the lack of wild emmer, suggests it was brought in for specific reasons. Similarly, the wild einkorn found at Aswad is thought to have come from western Turkey and was introduced to Aswad II alongside domesticated einkorn; however, its minimal amounts relative to other cereals found in Aswad II imply that "it was not a major crop" (van Zeist and Bakker-Heeres 1982:191). Additionally, the site's average rainfall could not have supported dry farming, and irrigation or cultivation in specifically marshy areas would have been needed to ensure a successful harvest (Stordeur et al. 2010). The effort required to cultivate cereals at Aswad suggests that these crops were of unusual value and significance.

Thus, except for a few possible instances when grains may have been intensively exploited for short seasonal periods (e.g., Ohalo II), the expectation that cereals constituted a minor element in Late Epipaleolithic diets seems generally confirmed, especially considering the likely taphonomic biases favoring cereal remains and minimizing nut or geophyte remains. Cultivated sources of cereals, as indicated by domesticated varieties, were of minimal importance until well into the PPNB (Tanno and Willcox 2011). Moreover, it appears that where cereals were not locally available, groups relying predominantly on hunting and gathering expended unusual efforts either to obtain them from distant sources or to cultivate them, mainly in predomesticated forms. These patterns did not change for at least a thousand years. Thus, all of the expectations concerning the brewing hypothesis for the role of early cereals can be viewed as borne out.

Hypothesis 5: Suitable Social Contexts

If brewing in traditional societies is universally, if not exclusively, associated with feasting contexts, then uniformitarian principles lead to the expectation that if there was brewing in the Late Epipaleolithic, there should also have been feasting, as well as a sufficient degree of social complexity to support feasting (i.e., minimally, a complex hunter/gatherer socioeconomic organization). We explicitly reject the assertions by some authors that this entailed social stratification, "institutionalized inequalities," settlement hierarchies, or chiefdom levels of organization (e.g., Smith 2001; Winterhalder and Kennett 2006; several anonymous reviewers). Rather, we have steadfastly viewed competitive feasting and brewing as developing in transegalitarian societies (which some authors confound with "egalitarian" societies; Wiessner 2002; see also the "comment" by Hayden).

We have established that the technological and technical prerequisites of brewing were in place during Natufian times. However, brewing beer is a laborious and time-consuming process that requires surplus cereals and control over significant labor (Dietler 1990; Arthur 2003; Jennings et al. 2005). It is not something undertaken by families of meager means, nor by individuals for frivolous purposes such as ephemeral personal whims or pleasures. The ethnographic literature makes it very clear that brewing is done by those with surpluses, almost exclusively for socially significant occasions. It is for this reason that brewing is an essential constituent of feasts in most areas of the traditional world. These labor, surplus, and feasting correlates of alcohol production are probably additional important reasons why brewing is absent among simple hunter/gatherers, who lack surpluses, private resource ownership, storage, and control over labor. When, then, does evidence for socially significant contexts like feasts (together with their associated correlates of surpluses, private ownership, and control over labor) first appear in the Near East? The earliest evidence for feasting in the region occurs in the Late Epipaleolithic; there is little if any indication of feasting before this time. Hayden (2004, 2011a) has documented this evidence in detail elsewhere, but we briefly summarize the main points here.

As noted previously in our discussion of pre-adaptations, there are a number of indications that food surpluses existed: increased sedentism, possible storage pits and other storage structures, the documented rich environments around Abu Hureyra (see Hillman 2000:366, 370–371, 384; Moore et al. 2000:480), increased populations, the manufacture and importation of prestige objects, and the breeding of dogs. Surpluses are prerequisites not only for brewing but also for holding feasts. In the Natufian, hearths over a meter in diameter (up to 7 m in the case of Rosh Horesha and perhaps El Wad; Goring-Morris and Belfer-Cohen 2009; Weinstein-Evron 2009) have been documented at Mureybet (Cauvin 1991), El Wad (Garrod and Bate 1937), Nahal Oren (Stekelis and Yizraeli 1963), Beidha (Byrd 1989:78, 103), Shanidar Cave (Solecki et al. 2004:28, 105–106, 120), and to lesser extents at other sites. Such hearths are often associated with burials and interpreted as having been used for funerary feasts. In addition, specialty foods and large quantities of faunal remains occur at a number of sites. For instance, at Hilazon Tachtit, remains of over 50 tortoises plus joints of boar and aurochs were associated with a special Late Natufian burial (Munro and Grosman 2010). Other excavators have remarked on the exceptional quantities of animal bones associated with large hearths or burial features, such as at Beidha (Byrd 1989), Wadi Hammeh (Webb and Edwards 2002:109), and Mallaha (Perrot 1960:258), and in related cultures at Hallan Çemi (where faunal remains were explicitly linked to feasting; Rosenberg and Redding 2000:44, 46, 49, 52), Zawi Chemi Shanidar, with its 15 caprid skulls associated with the unique structure at the site (Solecki 1980:53–54), and Shanidar Cave, where hearths were explicitly interpreted as the remains of funeral feasts (Solecki et al. 2004:28, 105–106, 120).

Prestige serving and food preparation vessels are also well attested in the Late Epipaleolithic. Highly standardized stone cups or bowls (Fig. 3), often made of basalt imported from 60 to 100 or more kilometers away, occur over a wide area in a number of Natufian, other Late Epipaleolithic, and PPNA sites of the Fertile Crescent, including Abu Hureyra (Moore et al. 2000:174), Mallaha (Perrot 1966:476), Jerf el Ahmar and Göbekli Tepe (Stordeur and Abbès 2002:584), Körtik Tepe (Ozkaya and Coskun 2009), Mureybet (Cauvin 1991:304), and Hallan Çemi (Rosenberg and Redding 2000). Like some of the finely made basalt mortars in the "core" Natufian sites, the bowls and cups are often decorated with incised designs. Such bowls are labor intensive to make, especially compared to bowls of wood or bark, and therefore make little sense as practical technological items; they can, however, be easily understood as prestige items meant to impress others at food events like feasts (see Klein 1997), although few excavators in the Near East explicitly interpret them in this fashion.
Fig. 3

Examples of decorated stone bowls from Epipaleolithic sites. a Mallaha (Perrot 1966:476; Copyright 1966 Elsevier Masson SAS. All rights reserved); b Abu Hureyra (Moore et al. 2000:174); c Hallan Çemi (Rosenberg and Redding 2000). This tradition continues in PPNA sites such as: d Jerf el Ahmar (Stordeur and Abbès 2002:584); e Körtik Tepe (kind permission of Vecihi Ozkaya and Aytaç Coskun); and f Göbekli Tepe (Beile-Bohn et al. 1998: Fig. 26). Of note are the very standardized range of sizes (c. 9–12 cm diameters and 8–9 cm heights), the uniform flat-bottomed and curved-sided shapes, the similar design styles, and the widespread occurrence over space and time. We think that these bowls are prime candidates for prestige serving vessels, especially for individual servings of liquids, being essentially the same size as a modern large tea cup or small soup bowl. The best candidates for liquids that could have been served are brewed beverages, emulsified nut oils, or soups

Even more impressive are the massive (up to 100 kg) deep mortars found in the Natufian. It is astonishing that such deep mortars could have been manufactured at all using stone tools, or at least how much time doing so must have taken; it is more astonishing still why they would have been manufactured, given the great amounts of time needed to quarry, fabricate, and transport (up to 80 km or more) these items. The unusually deep and narrow interior cavities of some types of "mortars" seem to have no practical purpose in terms of processing grains or nuts, while stone tends to crush grains much more than wooden counterparts do and makes it difficult to separate the grain from the chaff in flour production (Wright 1994:243). Stone is, of course, also much more difficult to work than wood. However, as noted earlier, the deeper, narrower varieties could have minimized the loss of carbon dioxide, essential for maintaining anaerobic conditions during fermentation. Thus, there are several reasons for viewing at least some of these mortars (often associated with graves) as unusual, high-cost prestige items, possibly used only in feasting contexts and possibly for brewing.

One of the more common features of feasting among complex hunter/gatherers is that items of wealth (prestige objects) are displayed and often given to guests in the context of feasts (see Hayden 1998, 2004). Such items can even be destroyed in intensely competitive feasts, and this may be why the large mortars associated with some Natufian burials were intentionally broken or breached. A wide array of other prestige objects regularly occurs in the richest Late Epipaleolithic centers of the Near East, including imported dentalium shells used by the hundreds on burial garments (especially at El Wad; Garrod and Bate 1937), stone beads buried by the thousands with some individuals (Solecki et al. 2004), the imported basalt bowls and mortars, decorated or grooved pebbles, imported obsidian, pierced canine teeth or bone pendants, shark teeth, bone sculptures, fox and leopard phalanges (indicative of pelt garments), raptor talons and wing bones (presumably used in costumes), domesticated dogs, jade, copper, malachite, and plastered or flagstone architectural features (for details, see Hayden 2004, 2011a). The presence of so many prestige items supports the idea that feasting was practiced at least in some of the major centers during the Late Epipaleolithic.

Feasting often takes place in special locations, and these are also represented in the Late Epipaleolithic at, or adjacent to, special community buildings (e.g., at Hallan Çemi, Ain Mallaha, Rosh Horesha), and at burial sites which are often located in caves such as Hilazon Tachtit, Shanidar Cave, and El Wad.

All of these indications of feasting (large hearths, surpluses, dense faunal remains, high-cost serving and processing vessels, prestige items, special locations) conform to what we know ethnographically about other complex hunter/gatherers, such as those in California, the North American Northwest Interior, and the Northwest Coast, who characteristically engaged in feasting for social and political purposes, used shells as feasting gifts, and sometimes served valued foods in prestigiously carved bowls (Hayden 2001). While there may still be reluctance in some circles to use ethnographic parallels to interpret Paleolithic or Epipaleolithic contexts, the Natufian is often referred to as "the best known example of a complex hunter-gatherer society in southwest Asia" (Finlayson and Warren 2010:89). We are persuaded that the case is strong for viewing the major Late Epipaleolithic communities of the Near East as complex hunter/gatherers, similar in fundamental ways to ethnographic groups in California and the Northwest Interior. To reiterate, these similarities include: sometimes large community sizes, high population densities, seasonal or more pronounced degrees of sedentism, exploitation of rich resources, private ownership, wealth items, storage practices, burial practices, and socioeconomic differentiation at the transegalitarian (not chiefdom or stratified) level (Hayden 2004, 2011a). As Rosenberg et al. (1998:653) note, the large sedentary Natufian communities are associated with "the development of corporate groups, the development of exchange systems based on principles other than generalized reciprocity, non-kinship-based status distinctions, and the development of more complex organizational systems than typically occur among mobile hunter-gatherers." Kelly (1983:292; 1991; 1992:46, 58) concurs that increased sedentism implies a relatively rich resource base and storage, which would have loosened the cultural constraints on self-interested pursuits, allowed aggrandizing individuals to obtain benefits for themselves, and eventually led to the development of inequalities. All these effects are hallmarks of transegalitarian complex hunter/gatherers. Sharing networks could be expected to have become restricted, and economically based strategies for reproduction and survival would have become predominant, especially the use of surpluses in feasts (Hayden 2009; Kelly 1992:58). Thus, it appears that in the Late Epipaleolithic there were well-established social contexts and motivations for producing surplus-demanding, labor-intensive beverages like nut oils and brewed liquids.

Summary of Hypotheses

While many of the indications we have for brewing in the Natufian are circumstantial, the expectations that the brewing hypothesis establishes for the archaeological record have been substantiated. As would be characteristic of a luxury food, the overall role of cereals (where they have been found in the northern Natufian and PPNA sites) was a relatively minor one, especially for cultivated (domesticated) cereals; special efforts were apparently made at some sites to procure cereals from unusual distances when necessary, or to cultivate them; and after domestication, the use of domesticated cereals remained at very low levels for about 1,000 years. The continuing mix of domesticated and wild cereals may well represent the inability of communities to cultivate enough cereals for brewing or feasting purposes and their willingness to travel whatever distances were necessary to supplement stores.

The technological pre-adaptations in the Natufian display remarkable similarities to brewing techniques, including the preparation of animal greases by bone fragmentation and boiling, as well as the likely preparation of nut oils by mass processing in mortars and boiling. Dehusking with mortars and the grinding of grains were probably also well established, given the ground stone assemblages and sickle blades, and it seems probable that the baking of simple breads in the ashes of open fires (as practiced by other hunter/gatherers, such as those in Australia) and/or the cooking of gruels by stone boiling were also used as cereal preparation techniques. The preparation of both bread and gruels would have been labor intensive and probably also used only for special occasions or under special conditions.

Similarly, the specific brewing techniques likely to have been used would not have required any more complex technology than was available in the Late Epipaleolithic. In particular, either grinding stones or stone mortars with stone pestles would have been effective for producing the coarse meal optimal for brewing, while stone or wooden mortars could easily have been used as containers for brewing. Appropriate contaminant yeasts could have formed part of the natural context or been introduced as part of the processing of other foods such as acorns, while the likely natural fermentation of other foods such as honey, palm or tree saps, and fruits like grapes would have provided a biological template for cereal fermentation. Thus, there do not seem to have been any significant technological impediments or constraints to the development of brewing in the Late Epipaleolithic.

In contrast to breadmaking, brewing has the benefit of not requiring the laborious full dehusking or fine grinding of all grains, and it would therefore have provided a more efficient, labor-saving way of using cereal grains. All of the cereals that were first domesticated (rye, einkorn, emmer, and barley) have been shown to be suitable for brewing, although each has particular advantages and disadvantages. Wild rye, specifically, has the advantages of high diastatic power and easier dehusking, but the disadvantage of becoming glutinous during mashing; if it was initially used for brewing in some localities, it may eventually have been replaced by wheat, and then by barley (probably after barley's low starch and high glucan levels had been modified by genetic selection).

Finally, there is good evidence for feasting in the Late Epipaleolithic of the Fertile Crescent. The most complex communities seem to have been complex hunter/gatherers who could be expected to have hosted competitive feasts in which brewed beverages would have been highly valued. We can thus expect that special efforts would have been made to procure or produce grains (and other substances) suitable for brewing and gruel or breadmaking to use in the feasts of this period.

Since this is primarily an exploration of the likelihood of the brewing hypothesis in the Near East, much more research is clearly needed for ultimate confirmation or refutation of the hypothesis. Specifically, residue and DNA analysis of any materials remaining in the interstices of grains at the bottoms of stone mortars or on fire-cracked rocks would be of the utmost importance to undertake. Similarly, much more detailed modeling of the prehistoric distributions of wild cereals in relation to major Late Epipaleolithic sites in the region, along the lines of the approach that Willcox (2005) used, would be critical. More information about ethnographic brewing would also be helpful, as would more experiments in brewing wild rye, einkorn, emmer, and barley; our own experiments were only approximations, since we did not have access to wild forms of these cereals. Additional excavations of Late Epipaleolithic and early Neolithic sites with these issues in mind would be extremely helpful. We also need more detailed information on the natural sources of alcohol-producing yeasts that occur in the region and that could have originally (accidentally) inoculated gruels or other preparations.

Consequences for Domestication

If the natural resources, technology, and social context were all propitious for the development of alcohol production in the Natufian, what implications do these prospects carry for the domestication process? First, contrary to the sometimes idealized image of cereal grains as ubiquitous and abundant throughout the Near East, it now appears that in a number of key localities in the Fertile Crescent, cereals were not immediately available. In fact, sites of first domestication are often described as being at the edges of, or beyond, the natural distributions of wild cereals (Hillman et al. 2001:389; Willcox 2005:539; 2007:27). For instance, Willcox (2005:539; 2007:27) concludes that the cereals used at the Natufian site of Mureybet were not grown locally but imported from the north. At Abu Hureyra, there were no wild stands of rye or wheat within 60–70 km, and only limited areas of moister wadi bottoms or slope breaks would have been suitable for cultivation. In other instances of domestication, such as rice and millet, it also appears that domestication did not take place in the localities most favorable for the wild species, but at the fringes of their natural distributions or beyond (Marshall and Weissbrod 2011; Yasuda 2002:131; Hayden 2011b).

Indeed, if cereals had ever been ubiquitous and abundant in the Near East, it becomes difficult to explain why anyone would have gone to the trouble of growing or domesticating them there. Moreover, cultivation at suitable sites could take place only after more aggressive local vegetation had been cleared (Hillman et al. 2001:387; Willcox 2005, 2007). Hillman et al. (2001:390) also stress that cultivation would have been labor intensive, similar, no doubt, to the considerable efforts expended by Northwest Coast groups to clear, spade, plant, and monitor plots of clover and cinquefoil (Deur 2002). The starchy roots of these plants were used in feasts, and their value presumably reflected the importance of those foods in feasts and the debts incurred by their consumption (Boas 1921:541–542; Turner and Kuhnlein 1982, 1983).

Given the constraints on transporting any quantity of grain over 60 km or more, it appears that the initial cereal grains were probably special and costly food items of minor subsistence importance in the centers of domestication. Ethnographic examples demonstrate that other cereals, such as maize, were transported "great distances" beyond their production environments by hunter/gatherers in the northern Woodland cultures of North America, and that these cereals were used primarily in contexts of "gift giving, reciprocity, ritual feasts and ceremonies" rather than for their daily subsistence value (Boyd and Surette 2010:129). Similarly, we suggest that only a special use of grains would warrant the undue efforts that some Natufian groups exerted to obtain these foods, especially given the very rich environments and abundant foods available locally at sites like Abu Hureyra (Hillman 2000:366, 370–371, 384; Moore et al. 2000:480). Making beer is one of the few likely candidate motivations for procuring early cereals, and it is consonant with the excessive efforts that are expended to make beer even today in traditional societies.

A second consideration for domestication issues is the low productivity, the high processing costs (for use in gruels or breads), and the high risks associated with the cultivation of most cereal grains, including wheat, rice, and maize. Gregg (1988:156, 161), Flannery (1969:74), and Hayden (2011b) all emphasize these risks, as well as the high labor inputs required, which exacerbate the risks involved in producing and keeping a successful harvest. Risks for early cultivators included drought, insects, diseases, storms, depredations by birds and other animals, theft, spoilage, weed competition, and poor pollination or germination. The cumulative effects of these factors could reduce or eliminate a harvest after considerable amounts of labor had already been invested (Gregg 1988:62–66, 73, 97, 132–134, 156, 161). Rodents alone can cause harvest losses of up to 92%, although the average is closer to 30% (ibid.: 97, 134). These considerations support several archaeologists who have noted that the first attempts at agriculture were likely frustrating endeavors with little initial lasting success. Flannery (1969:74) maintained that early cultivation was low in productivity relative to its high labor inputs and that domestication neither immediately improved the diet of early farmers nor created a constant food supply. Environmental variations from year to year meant that some years might see a surplus from cultivation while others yielded only fruitless effort. Gurven et al. (2010:50) even argue that small-scale, grain-based horticulture typically produces a limited amount of food, with rare surpluses and a general need to supplement diets with wild plants and hunting, even when metal tools are used with modern large-seeded crop varieties. The situation must have been even less productive in the initial phases of domestication, when less efficient stone tools were used with lower-yielding crops.

Rindos (1984:87–88) similarly emphasized the low, unreliable productivity of planting wild cereals and the lack of benefit in cultivating wild cereals in areas where they grow naturally. Using economic modeling, Bowles (2011a, b), too, concluded that initial cultivation appears uneconomical, since hunter/gatherer caloric returns in his study were 60% higher than those of simple farmers. Thus, it would appear that initial cultivation was a high-risk, low-return strategy unsuited as a basis for subsistence. It should hardly be surprising that domesticated cereals initially, and for some time afterward, constituted a small percentage of overall diets, with most subsistence provided by wild nuts, tubers, greens, fish, and animals. Given a strong but non-essential desire for grains and their products, these factors (especially variable low returns with crop failures) may explain the 3,000-year-long period of co-occurrence of wild and domestic species noted by Tanno and Willcox (2011).

As with rice cultivation (Hayden 2011b), the added effort, risk, and time involved suggest that cultivation would not have been undertaken by the majority of the population, nor for subsistence or risk-buffering purposes; it was likely only a minority who cultivated. Considering the small areas suitable for cultivation near the major sites on the Euphrates, and the efforts required to clear native vegetation, it would be expected that both the cultivation plots and their products would have been owned by individual families or corporate kinship groups and used for their own benefit2. Given all these characteristics, initial grain cultivation does not make sense as an adaptation to meet basic subsistence needs; however, it does make good sense as a strategy to produce some valued foods (beer, bread, and gruels) as a surplus supplement for special events. In poor years, only the feasting would be foregone; in good years, hosts with surpluses for feasts could benefit handsomely (see Hayden 2009). Thus, the nutritional risk of initial cultivation would have been minimal, while the sociopolitical benefits (through feasting) could have been considerable. The same logic also applies to other foods that could have been specially valued, rare, costly, and nutritious, such as chickpeas and lentils (Kerem et al. 2007). Our arguments about cereal domestication are certainly not meant to exclude the domestication of other species for feasting purposes.

A third consideration for domestication issues is that the Younger Dryas does not seem to have had the extreme adverse impact on Natufian subsistence and society that many authors impute to it (Balter 2007:1835). Weinstein-Evron (2009:112) disputes that the Younger Dryas had any significant impact in the core area of the Natufian. In fact, Natufian groups flourished and expanded during this time into rich Euphrates environments like Abu Hureyra, which, as Maher et al. (2011:21) note, would be a curiosity under hardship conditions. Similarly, Nesbitt (2002:124) and Watkins (2010:108) argue that the Younger Dryas played no role in cereal domestication. Willcox (2005, 2007; Willcox et al. 2009) notes that cereal grains actually appear to have increased during the Younger Dryas in continental areas of the Fertile Crescent, while the major nut-bearing trees were constant features of these landscapes before, during, and after the Younger Dryas. Wright (1994:253) concluded that the use of cereals did not result from nutritional or climatic stresses. More recently, Maher et al. (2011) have argued that the effects of the Younger Dryas varied widely from place to place in the Middle East, especially with altitude, and that this climate change had no significant impact on the cultures of the Near East. If cooler temperatures had any adverse effect, it may have been to make highly valued food items like cereals somewhat more difficult to obtain in certain areas. But as Willcox (2005:538) also notes, it is "difficult to argue in favor of the determinist model of reduced availability brought about by climate change, when gatherers were already gathering at some distance from the sites." He also notes that people were already transporting raw materials (basalt, obsidian, prestige items) across considerable distances during the Late Epipaleolithic, and that grain appears to have been but one more item in this prestige exchange system.
The general similarities of Late Epipaleolithic cultures throughout the Fertile Crescent, and the rapid spread of basic technological innovations such as microliths and grinding stones, all attest to high rates of interaction and exchange throughout the region (Richter et al. 2011).

A fourth implication for domestication issues is that all the experimental, ethnographic, and comparative observations in this study have tended to support an important role for grains in brewing for feasts. We thus endorse the earlier suggestions by Sauer, Braidwood, Katz and Voigt, McGovern, and others to the effect that increasing demand for beer was likely a major motivating factor for cultivating and domesticating cereals in the Near East. Similar arguments might also be advanced for breadmaking. This is consonant with Willcox's (2005:539–540) conclusion that "Social considerations such as the accumulation of wealth, social stratification, ownership and exchange can no longer be ignored as incentives to cultivate…With increasingly complex village life and regional rivalry for access to wild stands, the adoption of cultivation near a village would be a distinct advantage" (emphasis added; see also Stordeur and Willcox 2009:709). We conclude that feasting and brewing very likely provided a key link between increasing "complexity" and the adoption of cereal cultivation. However, there is still no smoking brew pot, and there are still many details to investigate. We thus look forward to the coming years of work in this stimulating domain of research.


The mashed grain left over from the mashing process is enriched with proteins, fiber, ash, and lipids and is thus often used as a cheap feed for local animals (Hornsey 1999:41; Briggs et al. 2004:166). These spent grains also have the potential to be used for human consumption. One 1986 patent recognized the nutritional value of spent grains but noted that the husks, which can be sharp and difficult to chew, would need to be removed or rendered down to make the cereal acceptable for today's market (Gannon 1986). Similar patents, such as Bavisotto (1965) and Chaudhary (1982), have also emphasized the potential nutritional and health benefits of cereal reuse. These claims are supported by scientific studies addressing the advantages of proteins from brewing spent grains when they are added to other cereal-based recipes (Mussatto et al. 2006; Stojcesk et al. 2008). Nor are spent grains limited to barley and other typical Western brewing cereals: grains such as sorghum have also demonstrated nutritional potential and benefits (Adewusi and Ilori 1994). Additionally, the yeast produced in brewing, itself rich in vitamins and proteins, is used in contemporary society to produce nutritional supplements (Moyad 2007:561; Wyrick 1944:3).


Kuijt and Finlayson (2009:10966) have interpreted the PPNA granaries at Dhra' as "being used and owned communally." However, given their subsequent observation that "many granaries would have been in use simultaneously," their view of communally owned facilities must be questioned. Such multiple facilities would seem to make more sense as owned by individual corporate groups, or even households. Ethnographically, it is our impression that communal storage by entire small communities is rare or absent, while there are good examples of corporately owned resources and storage. The obvious PPNA storage facilities at Jerf el Ahmar have also been viewed as communal. However, if these ritual structures were actually the ritual centers of secret societies, as Hayden (2003) has suggested, the stored material would not have been communally owned but rather owned by the secret society members, representing one of the first instances of the expropriation of surpluses from resident families by incipient elites. The numerous "small bins" of stone or clay at Jericho and Netiv Hagdud referred to by Kuijt and Finlayson may have been more typical storage facilities for individual households.



We would like to thank the Social Science and Humanities Research Council of Canada for their support of Hayden’s research into traditional feasting, as well as George Willcox for his help and comments on earlier drafts. Dani Nadel, Steve Rosen, and a number of anonymous reviewers were generous with their insightful comments as well. Our gratitude goes to Saul Moran for assisting in the experiments and providing an experienced eye for the brewing process, as well as to Dan Small and Dan’s Homebrewing Supplies for their expertise and product support. Joe Hepburn reviewed the text, and David Gauthier, Kevin Gaetz and Mario Arruda generously contributed their masticating talents.

Copyright information

© Springer Science+Business Media, LLC 2012