By now it should be abundantly clear that neither ‘the public’ nor ‘public health’ ever had a fixed or entirely coherent set of meanings. Yet, in the post-war period, the ‘publicness’ of public health seemed to undergo a radical shift. In this chapter, we reflect on both the challenges to publicness in public health and its persistence. In the first section of the chapter, we concentrate on the increasing focus on individual behaviour as both a cause of disease and its remedy, and the ways in which this posed a danger to collective understandings of the public and its health. Emphasising personal responsibility for health shifted liability from the state to the citizen, from public to private. At the same time, in the second section of the chapter, we suggest that there were all sorts of ways in which publicness was retained, remade and even reinforced. ‘The public’ was increasingly recognised as an important actor in public health policy, practice and research. In the final section of the chapter, we consider developments beyond the dichotomy of ‘public’ and ‘private’ that were nonetheless crucial to changing conceptions of both the public, and public health policy and practice. Issues such as social structure and the environment impacted upon the public’s health, but also influenced how people thought about the public. At the same time, new technologies offered fresh opportunities to create new public spaces and new publics.

Such complexity means we cannot tell a simple story of decline. The public has not ‘fallen’, but it has been penetrated by the private in a range of novel ways. The shifting boundaries of what could be considered public and what could be considered private, the making and remaking of various publics, and the spaces which they occupied, suggest that the public/private divide was increasingly hard to discern. Some might see this as private capture of the public sphere, but there were many ways in which the public, as a collection of people, as a space for action, and as a set of values, continued to matter. The blurring of boundaries between public and private health and the overlapping nature of publics and their interests may present problems of categorisation, but it did not signal the end of publicness in health.

1 Private

Interest in individual behaviour and the supposed ‘privatisation’ of public health were shifts that played out within public health policy and practice, but they mirrored broader changes within British society and politics. From the late 1970s onwards in Britain, America and other high-income countries, neoliberal ideas began to influence many areas of government policy. Emphasis on individual entrepreneurial freedom, private property, free markets and free trade led to the development of policies that encouraged marketisation, privatisation and the prioritisation of individual wants and needs (Harvey 2007; Rodgers 2012). This led to the ‘rolling back’ of the state in the provision of public services. In healthcare, this was manifested most clearly in attempts to develop an internal market within the NHS and the growing use of private companies and private money to build hospitals and deliver services (Pollock 2005). Central to such ‘neoliberal’ (broadly defined) approaches was a view of the individual as sovereign. Individuals were thought to be best placed to make choices about their use of services, their lives and their interests (Le Grand 2007).

The influence of neoliberal ideas and policies paralleled changes in public health policy and practice in two areas: firstly, in the growing emphasis on the individual; and secondly, in the enhanced role for private companies within public health. Although public health policies and practices had intruded frequently into the private sphere, in the post-war period, the linking of chronic diseases to individual behaviour seemed to require a new level of interest in what had been private actions. In this section we consider how chronic disease prevention came to be reconfigured as a personal responsibility and a matter of individual choice. Such a shift was facilitated by the emergence of risk factor epidemiology, which was able to calibrate an individual’s risk of developing a specific condition. An increased focus on the individual was not, however, the only manifestation of ‘the private’ within public health. We also examine the growing role played by private companies in shaping the public’s health.

1.1 Personal Responsibility and Individual Choice

Public health policymakers and practitioners had long been concerned with the private actions of individuals and the consequences these had for personal and collective health. Encouraging people to change their behaviour was a central part of early-twentieth-century health education efforts, but in the UK and in other high-income countries, the linking of lifestyle to disease from the 1950s onwards prompted closer interest in individuals and their conduct (Berlivet 2005; Rothstein 2003; Timmermann 2012; Fee and Acheson 1991). In Britain, the work of Richard Doll and Austin Bradford Hill on smoking and lung cancer was especially important in connecting individual behaviour to disease. In his classic 1957 text, Uses of Epidemiology, Jerry Morris asserted that ‘prevention of disease in the future is likely to be increasingly a matter of individual action and personal responsibility’ (Morris 1957). As the list of behaviours that were thought to bring about ill-health expanded to encompass diet, exercise and alcohol, public health educators changed their approach to communicating with the public about threats to their health. For instance, the 1964 Cohen Report on health education recommended moving away from ‘specific action campaigns’, such as educating the public about vaccination, and towards areas of what it termed ‘self-discipline’, such as smoking, over-eating and exercise (Central Health Services Council and Scottish Health Services Council 1964).

By the mid-1970s, public health policy was increasingly orientated around the idea that individual behaviour was responsible for many public health problems. For example, in 1976, the government’s major report on the public’s health and how to improve it, Prevention and Health: Everybody’s Business, asserted that:

the weight of responsibility for his own health lies on the shoulders of the individual himself. The smoking related diseases, alcoholism and other drug dependencies, obesity and its consequences, and the sexually transmitted diseases are among the preventable problems of our time and in relation to all of these the individual must choose for himself. (Department of Health and Social Security 1976, 38)

Similarly, a few years later, in 1988, the Acheson report into the functioning of public health in England noted that:

in recent years there has been a significant shift in emphasis in the perception of the determinants of the health of the public. In the context of the rise in importance of such conditions as cardiovascular disease and cancer, this now focusses far more than before on the effects of lifestyle and on the individual’s ability to make choices which influence his or her own health. (Cm 289 1988, 2)

The role of public health authorities was, according to public health practitioners John Ashton and Howard Seymour, to ‘help make healthy choices the easy choices’ (Ashton and Seymour 1988, 22). This could be achieved through regulation and legislative controls, but more often than not it was seen as the task of health education or health promotion.

As noted throughout the book, health education campaigns in the post-war period were frequently targeted at getting individuals to change their behaviour. Whether it was healthy eating, alcohol consumption or cigarette smoking, individuals were encouraged to take responsibility for their health by choosing an appropriate course of action (Hand 2017; Mold 2017; Berridge and Loughlin 2005). Such a view was predicated on a particular kind of self—an autonomous individual capable of self-government in response to expert advice (Miller and Rose 1990). People could choose to respond to illness or maintain their health within a broader culture of ‘healthism’ that situated the problem of sickness at the individual level (Armstrong 2009; Crawford 1980). As Deborah Lupton suggests, ‘Healthism insists that the maintenance of good health is the responsibility of the individual, or the idea of one’s health as an enterprise … Healthism represents good health as a personal rational choice, “a domain of individual appropriation” rather than a vagary of fate’ (Lupton 1995, 70). A focus on individual behaviour resulted in a conception of the public as a collection of self-governing rational actors able to respond to public health messages and change their behaviours accordingly. The role of the state, from a neoliberal perspective, was to facilitate the entrepreneurial actions of individuals rather than to create the broader social, economic and political conditions for good health (Ayo 2012). For Lupton, ‘The concept of health as it is employed in contemporary public health and health promotion thus tends to individualise health and ill-health states, removing them from the broader social context’ (Lupton 1995, 71). Under such logic, health, while not solely a private affair, was the prime responsibility of the individual, not public authorities or actors. As we discuss in greater detail below, this view was not the only one in operation, but it was one that appeared to hold increasing appeal to governments of various political hues from the 1980s onwards.

1.2 Individual Risk

As discussed in Chapter 3, the emergence of risk-factor epidemiology and its comparison of the personal characteristics of groups to calculate probabilities of disease led to a focus on individual risk in the immediate post-war era. Nowhere was this more clearly articulated than by Morris, who noted that ‘risks, chances and probabilities for the individual can be predicted, on average, from analysis of the collective experience of large numbers of representative individuals with the characteristics in question’ (Morris 1957, 1955). These techniques, in part derived from the insurance industry, informed a ‘new style of explaining cause and responsibility’ in epidemiology, and, by extension, public health (Rothstein 2003; Aronowitz 2011; Giroux 2013).

The historian Dorothy Porter has used the figure of Morris as an avatar for post-war public health, arguing that Uses of Epidemiology provides evidence for Morris’s declining interest in social class as an explanation for disease distribution, and that this ‘allowed the deconstruction of the complexity of the social and biological relations of chronic diseases through the identification of “ways of living” as their primary cause’. From this, it followed that public health ‘was able to offer the opportunity to prevent illness by changing social and individual behaviour’ (Porter 2007, 82). Porter suggests that post-war public health authorities pursued ‘a new hegemonic mission for preventive medicine that looked to reform personal and social behaviour rather than the reform of social structure as the route to a healthy society’ (Porter 2002). While this individualisation of risk might have been most visible in exhortations for behaviour change, it was also evident in debates around the introduction of screening programmes for disease from the 1960s onwards. In some instances, this was a matter of the public themselves claiming the right to access the technologies for detecting as yet undiagnosed illness as a means of prevention, as with women’s groups’ campaigns for cervical cancer screening (Lowy 2011). For other diseases with multifactorial causes such as heart disease, however, some advocates viewed ‘regular health examinations’ as a potential means to provide tailored individual lifestyle advice to ‘symptom-free individuals at high risk’ (Turner and Ball 1973). However, the diagnostic accuracy of screening tests was a major barrier to such ambitions, with even contemporary comment in The Lancet conceding that:

Identification of a susceptible individual has never really been achieved except through a gross averaged assessment of accumulated risk factors … a prediction within a high risk group on the basis of multiple factors still produces incorrect forecasts more often than correct ones. (Anon. 1974)

Indeed, debates on screening in the medical literature were largely predicated on technical issues of the sensitivity and specificity of any proposed tests, but resources were also considered an issue (Wilson and Jungner 1968). Nonetheless, with the introduction of breast cancer screening in 1988, such programmes became an integral part of contemporary public health, with schemes addressing a wide range of cancers, maternal and new-born issues, as well as genetic conditions (Gov.uk, n.d.). In 2009, the NHS Health Check was introduced in England, with the objective of spotting early signs of stroke, kidney disease, heart disease, type 2 diabetes and dementia in adults aged 40–75, although take-up among the population was lower than anticipated (Clark et al. 2018). Although ascertaining and reducing individual risk of developing a chronic condition became crucial to public health practice, individuals themselves still needed to be persuaded of its benefits.
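The arithmetic behind such scepticism about ‘incorrect forecasts’ is worth making explicit. The sketch below is illustrative only, using hypothetical figures rather than any reported in the screening literature cited above: even a test with respectable sensitivity and specificity yields mostly false alarms when the condition screened for is uncommon.

```python
# Illustrative only: how sensitivity and specificity interact with prevalence.
# All figures are hypothetical, not taken from the studies discussed above.

def positive_predictive_value(sensitivity, specificity, prevalence):
    """Probability that a positive test result reflects true disease (Bayes' rule)."""
    true_positives = sensitivity * prevalence
    false_positives = (1 - specificity) * (1 - prevalence)
    return true_positives / (true_positives + false_positives)

# A seemingly strong test (90% sensitive, 90% specific) applied to a condition
# affecting 2% of those screened:
ppv = positive_predictive_value(0.90, 0.90, 0.02)
print(f"PPV = {ppv:.1%}")  # ~15.5%: incorrect forecasts outnumber correct ones
```

On these figures, roughly five in six positive results would be wrong, which is precisely the predicament the anonymous Lancet commentator described.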

1.3 Private Companies and Public Health

An increased focus on individuals and their health was not the only way in which boundaries between public and private were redrawn in the post-war era. While in Porter’s words, ‘[p]opulation health has always depended on collective provision of social welfare’, private corporations began to take a significant interest in public health in Britain from the 1960s onwards (Porter 1999, 6). This interest took a number of forms, and was sometimes in partnership with state-based public health, and sometimes quite distinct from, and even in opposition to, it. Some companies used epidemiological research and the discourse of public health education to help sell their products, while others sought to encourage their employees to practise healthier lifestyles. By the end of the twentieth century, some firms had even moved to providing established public health functions such as screening to their prospective customer base.

Historian Jane Hand has explored how Unilever, the ‘leading food producer in post-war Britain’, used emerging epidemiological evidence about the possible role of saturated fats in causing heart disease, and ‘seized on societal reactions by engaging at-risk individuals as key agents of behavioural change, empowering them through consumption and the commodification of disease prevention’ (Hand 2017, 482). Its advertising campaigns for Flora margarine targeted middle-aged men, as well as the housewives purportedly responsible for their husbands’ diet. For Hand, ‘[n]ot only were notions of “selling” health central to programmes of popular education at the behest of government but they also formed important components of marketing agendas by the food industry’ (Hand 2017, 488). Indeed, in 2007 Unilever went further, offering blood cholesterol screening as part of its ‘Test the Nation’ campaign, using an established public health technique to market its product, conducting 72,000 tests and apparently adding £5.9m in sales value to Flora pro.activ (Unilever, n.d.).

As well as aligning themselves with public health discourses, corporate interests also sought to influence them. A snapshot of these efforts is provided by evidence given to a parliamentary inquiry into preventative medicine in the mid-1970s. While many witnesses who worked in public health had criticised elements of the food and tobacco industries, evidence provided by individuals representing such corporate interests demonstrated a more complicated and conflicted relationship between the two factions. For example, the Tobacco Research Council revealed that they continued to fund a number of epidemiological studies, including the Whitehall study of civil servants led by Donald Reid, who had given evidence to the inquiry on behalf of the London School of Hygiene and Tropical Medicine (Expenditure Committee 1977, 805). Alongside providing funding for such research activities, corporations were also keen to use epidemiological data to bolster their own arguments regarding prevention, especially when it might dovetail with the promotion of their products. The manufacturer of Flora margarine repeatedly lobbied the inquiry and government ministers to endorse poly-unsaturated fats, as well as noting that Jerry Morris, in giving evidence, had mentioned its product by name (GI Grant, letter to A Milner-Barry, n.d.). Other food industry bodies such as the National Dairy Council highlighted uncertainty in the research literature to try to prevent the committee forming adverse opinions about milk, butter and their potential link to Coronary Heart Disease (CHD) (National Dairy Council 1976). Finally, the John Lewis Partnership highlighted the preventative work they were conducting with their employees, offering screening for cardiovascular disease and fitness campaigns for staff, a model that would be adopted as part of the ‘Look After Your Heart’ campaign in the 1980s (WM Dixon, letter to A Milner-Barry, 2 January 1976). The boundary between ‘private’ companies and ‘public’ health efforts was, therefore, increasingly blurred.

2 Public

Despite the increased focus on private actions and private companies within public health, there were also a number of ways in which publicness was retained and even strengthened. Just as neoliberalism was unable to entirely erode social democracy in other sectors of British politics and society, large parts of what was thought of as ‘public health’ remained ‘public’ (Vernon 2017, 476–516). This can be seen in the extent to which ‘the public’ and its needs became a topic of both political and research interest. In this section, we consider how calls for increased public representation in healthcare impacted upon public health policy and practice. We also look at how social scientists attempted to create a picture of the health of the whole public through the development of the concept of ‘population’ and the representative survey.

2.1 Representation

From the 1960s onwards, there was increasing pressure for greater public representation within healthcare in Britain. Some of this momentum came from patients and the public (Mold 2013). Patient organisations and other voluntary groups demanded a say in their own treatment and the shape and direction of health services. By the 1970s, the government recognised that there was a need to take patient and public views into account. Following the reorganisation of the NHS in 1973, Community Health Councils (CHCs) were set up in every local health authority in England (similar mechanisms were put in place for Scotland, Wales and Northern Ireland). These were intended to be the ‘voice of the consumer’ within the NHS (Joseph 1973). Although the CHCs were highly variable in their effectiveness, they did provide a means through which public views on health services (including public health) could be heard (Mold 2015, 42–68; Hogg 2009). During the 1980s, the number of organisations claiming to represent the opinions and needs of patients and the public in health grew considerably (Salter 2003; Wood 2000; Baggott et al. 2005). At the same time, the figure of the patient, and particularly the ‘patient-consumer’, grew in political saliency as first the Thatcher and then the Major governments made substantial changes to the organisation of the NHS. The introduction of the internal market in 1989 and the establishment of the Patient’s Charter in 1991 were supposed to give patients greater rights within a more consumer-orientated service (Mold 2012, 2015, 94–116). Although the precise mechanisms for achieving patient and public representation within healthcare changed frequently following the scrapping of the CHCs in 2003, the imperative for such representation did not go away. An ‘alphabet soup’ of organisations was created to provide public representation in health, although questions can be raised about their effectiveness (Forster and Gabe 2008; Hogg 2009; Baggott 2005). At the same time, representation often focused on healthcare and the NHS, rather than public health or public health services per se (Baggott and Jones 2011). Indeed, public representation within public health services and practices was especially difficult. In part, this was a result of the changing position of public health services within the health system, but also because it was difficult to provide opportunities for representation across a wide range of services, issues and areas.

That does not mean, however, that the public went unrepresented within public health. One way in which this was achieved was through health surveys. Surveys were a technology through which public needs could be shaped and represented by public health authorities, but they could also act as an arena for the articulation of certain needs by different publics, providing evidence to support demands. In the late 1960s, the Department of Health and Social Security (DHSS) commissioned a survey from the Government Social Survey (GSS) department on the experiences of the ‘chronic sick and physically handicapped’ living in Britain. This survey aimed to chart the numbers of people in Britain living with disability or chronic illness and the effects their conditions had on their lives. The DHSS intended to use the information gathered in the survey to shape its policy decisions regarding the benefits made available to disabled people.

In rethinking the provision of services available to disabled people, the DHSS was not operating in a vacuum. In 1965, the publication of Peter Townsend and Brian Abel-Smith’s The Poor and the Poorest represented the high point of a ‘rediscovery of poverty’ in Britain that had been building from the late-1950s onwards (Hampton 2016). As it became clear that the welfare state had not eradicated poverty, especially in neglected demographics such as elderly and disabled people, a ‘poverty lobby’ emerged, composed of voluntary organisations which campaigned on behalf of these disadvantaged sections of society (Oliver and Campbell 1996; Whitely and Winyard 1987). This included organisations such as the Disablement Income Group (DIG), which was also established in 1965. DIG was the first advocacy group formed and led by disabled people. It was set up by two disabled housewives in Godalming, Surrey, after they wrote a letter to the Guardian women’s page (Stott 1989, 77; Hilton et al. 2013, 61). Initially formed to highlight the difficulties of disabled women, DIG campaigned for a non-contributory ‘National Disability Income’ for all disabled people based on need (Millward 2015, 275–76; Hampton 2016, 88). The DHSS was very conscious of the political climate surrounding its investigations into disabled people’s lives and needs. A draft paper from the Ministry of Social Security in spring 1968 spoke of the ‘considerable volume of complaint about the present arrangements’ and noted the ‘sustained and heavy pressure … exerted’ by ‘the Disablement Income Group … the National Council for the Single Woman, the National Campaign for the Young Chronic Sick, the Press, and Members of Parliament of all Parties’ (DHSS, Proposed Survey of the number and needs of chronic sick and handicapped people, 1968).

Although the survey was largely developed by the GSS in conference with the DHSS and Margot Jefferys from Bedford College, the DHSS also consulted DIG (Harris 1971). DIG was given a draft copy of the questionnaire, and asked to provide evidence and advice based on a recognition that it had a ‘unique expertise … in knowing where disabled people lived and what questions might be pertinent to the issues they faced’ (Millward 2015, 284). Although the ‘public’ nature of DIG is somewhat contested—the group’s patrons included senior academics and politicians, and from 1969 onwards their leaders were well-educated and included ex-civil servants—this was a case of public pressure being exerted on the Government in a moment of reform (Millward 2015, 278). Through campaigning, DIG’s expertise was recognised, and they were able to shape the Government’s research to make sure the survey asked questions which articulated their needs and those of other disabled people in Britain. The survey findings were later used in the development of an Attendance Allowance, Invalidity Benefit, and the 1975 Social Security Benefits Act, which created benefits for disabled housewives (Millward 2015, 280). In this way, a disabled public contributed to its own remaking through the mechanism of the public health survey.

Although ‘the public’ was increasingly recognised as an important actor in public health research, not all publics were recognised in the same way or given the same platform. By the 1980s, people of colour were utilising the survey to fight against health inequalities and discrimination, and to draw attention to issues they felt were otherwise left off the agenda of public health. One example of this was Elizabeth Anionwu and Usha Prashar’s research into screening and counselling facilities for sickle cell anaemia, which found services to be lacking and raised the issue of racial discrimination. In Sickle Cell Anaemia: Who Cares? Anionwu and Prashar wrote of an indifferent public health service. They suggested that sickle cell disorders had been seen as ‘rare tropical illness and of little significance in Britain’, with the consequence that health officials received little information about them in their training (Prashar and Anionwu 1985, 8). Although the Organisation for Sickle Cell Research (1975) and the Sickle Cell Society (1979) ‘constantly attempted to highlight the problems facing’ those with sickle cell disorders (SCD), Anionwu and Prashar argued that little had been done by public health. Sponsored by the Runnymede Trust, their survey investigated what services were available to SCD patients within the NHS. The survey found that services were ‘ad hoc and patchy’ with ‘no central guidelines and … no resources … made available either centrally or locally’ (Prashar and Anionwu 1985, 50). SCD had been left out of the formula used by the 1975 NHS Resource Allocation Working Party (an attempt to redistribute resources within the NHS) (Gorsky and Millward 2018). Anionwu and Prashar used these findings to argue that funds should be ‘allocated centrally for the development of comprehensive care for SCD and that firm guidelines [be] issued on the principles, aims and structures of such care’ (Prashar and Anionwu 1985, 51). Following this, Anionwu’s Ph.D. research explored health education and community action around sickle cell anaemia in the London borough of Brent and brought to the fore the ‘harrowing experiences’ of BME parents she surveyed whose children had been diagnosed with it. Despite her focus, Anionwu wrote that it was:

vital to stress that this thesis is not advocating that black health workers restrict themselves to specific conditions relevant to black people. Singling out particular conditions for attention, such as SCD or rickets has rightly been condemned in various quarters as providing an opportunity for health authorities to side-track the most important issue, that of institutional racism. (Anionwu 1988)

Anionwu continued to centre the effects of racism, institutional and otherwise, on the health and healthcare of people of colour in Britain in her work. Later she advocated for a national register for SCD, suggesting that more accurate information regarding the estimated national prevalence of the condition might strengthen the case for support (Anionwu and Atkin 2001, 2, 121). In this way, when government-led research was not forthcoming, publics created and called for surveys of their own to support demands for services which they considered neglected by public health.

2.2 Population

The survey was not just a tool for specific publics to get their needs onto the agenda. Public health authorities’ expansion and interpretation of statistics, including the use of the representative survey, played a vital role in determining how population health was viewed by policymakers and what actions should be taken to improve it (Szreter 2002). A new, more comprehensive conception of the public as objects of, and participants in, research and the governance of health began to take hold (Crook 2016, 295). At the heart of the representative survey is the notion that ‘the part can replace the whole’, but, as Alain Desrosières suggests, the idea of representativeness forces the questions: ‘What is part? What is whole?’, and in turn demands definition of the population, or ‘whole public’, in which the two are firmly linked (Desrosières 2010). In post-war Britain, through surveys and representative sampling, public health consistently used parts of the public to represent the whole population. But the questions asked of these representative members of the public and the categories they were subsequently sorted into changed, thus altering the conception of the ‘whole public’ as well as reflecting a public that had also changed.

The clearest example of this can be seen in Britain’s whole-population birth cohort studies, starting with the National Survey of Health and Development in 1946, and followed successively by the National Child Development Study in 1958, the 1970 British Birth Cohort Study, and the UK Millennium Cohort Study in 2000. In 1946, the National Survey of Health and Development questionnaire asked ‘mothers’ to categorise themselves through questions about size of the household and occupation of the baby’s father (Royal College of Obstetricians and Gynaecologists and the Population Investigation Committee 1946). Of the initial 13,687 responses to the maternity survey, 5362 children were sampled for follow-up including ‘all single births to married women with husbands in non-manual and agricultural employment and 1 in 4 of all comparable births to women with husbands in manual employment’ (Wadsworth et al. 2006). In the 1958 and 1970 studies, all survivors from the original sample of all babies born in the study week were selected to remain in the study, including those born to unmarried women (Power and Elliott 2006). The 1958 and 1970 questionnaires each asked the participant mothers about their husband’s occupation, and their own work before pregnancy, though the 1970 questionnaire afforded these equal weight, whereas the 1958 version positioned the mother’s work as more of an afterthought (Centre for Longitudinal Studies, n.d.). Like the 1946 study, the 1958 National Child Development Study Birth Cohort did not ask its participant mothers for their ethnicity. The 1970 British Birth Cohort Study’s first survey also contained no questions about ethnicity. By 2000, however, the Millennium Cohort sample was constructed to be ‘representative of the total UK population’, but ‘certain sub-groups of the population were intentionally over-sampled, namely children living in disadvantaged areas, children of ethnic minority backgrounds and children growing up in the smaller nations of the UK’. The disproportionate representation of these groups was to ensure that ‘typically hard to reach populations’ were ‘adequately represented’ and that sample sizes were ‘sufficient for … analysis’ (Connelly and Platt 2014). Study members were asked a much broader range of questions than in previous surveys: ‘Are you married to the baby’s father/ separated/ divorced/ just friends/ not in any relationship?’, ‘How many hours of child care do you pay for each week?’, ‘Did you have any medical fertility treatment for this pregnancy?’ (Pearson 2016; Centre for Longitudinal Studies, n.d.). While these questions reflected changes to lifestyle, and changes in the interests of the surveyors, they were ultimately facilitated by new technology: questionnaires were completed on computers. A greater number of questions could be included than on cumbersome paper schedules, and the data gathered could be more easily organised (Pearson 2016, 264).

Neither the 1946 study nor the 2000 study encompassed a ‘whole public’ physically living in Britain; instead, their study population selections reflected the continuities and changes present in the interests of the public health surveyors, albeit with carefully ‘adjusted analyses to provide accurate prevalence estimates and robust standard errors’ (Connelly and Platt 2014, 1719). In 1946, the concern of the Population Investigation Committee-commissioned study was over British birth rates, and this was reflected in the sampling of white British children born to married parents (Pearson 2016, 19). In 2000, the surveyors were still driven to understand how disadvantage in early life affected health and development in later years, hence the over-sampling of children living in poverty, but they had a new interest in Britain’s growing minority ethnic population, and so over-sampled children from those backgrounds as well (Pearson 2016, 261). As Desrosières suggests, although these statistical methods were originally developed by eugenicists, what is remarkable is that the ‘techniques quickly became … almost obligatory checkpoints for proponents of other views’. Statistics continued to ‘structure … the very terms of debates, and the language spoken in the space of the politico-scientific debate’ (Desrosières 2010, 329). As such, the ‘whole public’ represented through public health surveys tended to reflect the parts of the whole which were of particular interest to the surveyors and to public health, while also mirroring changes in society.
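The mechanics of such ‘adjusted analyses’ can be conveyed with a deliberately simplified sketch. The figures below are invented, not drawn from the Millennium Cohort Study itself; the point is only that weighting each respondent by the inverse of their relative selection probability allows an over-sampled survey still to stand for the ‘whole public’:

```python
# Minimal sketch of design-weighted prevalence estimation. All figures are
# hypothetical, not taken from the Millennium Cohort Study.

n = 10_000  # achieved sample size
strata = {
    # name: (population share, sample share, observed prevalence of some outcome)
    "rest of population": (0.90, 0.70, 0.10),
    "over-sampled group": (0.10, 0.30, 0.25),
}

num = den = 0.0
for pop_share, samp_share, prevalence in strata.values():
    respondents = n * samp_share
    weight = pop_share / samp_share  # inverse of relative selection probability
    num += respondents * weight * prevalence
    den += respondents * weight

print(f"weighted estimate:   {num / den:.3f}")                  # 0.115
print(f"unweighted estimate: {0.70 * 0.10 + 0.30 * 0.25:.3f}")  # 0.145
# Without weights, the over-sampled group drags the estimate upwards; with
# them, each stratum counts in proportion to its share of the population.
```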

Some of the tensions between the individual and their relation to the wider population were explored in the late 1980s by Geoffrey Rose, a researcher on the Whitehall studies, but also a vastly experienced and well-respected figure in epidemiological circles internationally. His seminal paper ‘Sick individuals, sick populations’ was a key intervention at a time when questions were being asked about the role of prevention, health education and health promotion. While in the 1970s there had been a widespread consensus on the principle of prevention, by the middle of the 1980s this had splintered. Rose clarified many of the conceptual issues, using examples from his own research. Firstly, he outlined how epidemiologists were able to find out which individuals were at high risk of disease, by examining their differential exposures to a particular risk factor. But if that risk factor were common—for example, if everybody smoked twenty cigarettes a day—then it would be very difficult to work out what that risk factor or behaviour was, because everyone’s exposure would be the same, and incidence of disease would only vary based on individual genetic susceptibility. The epidemiologist would instead have to turn to comparing different populations—for example, British civil servants and Kenyan nomads—who had entirely different rates of the same disease and work out what exposure was common in one group but not in the other (Rose 1985).
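Rose’s logic can be restated in miniature. The simulation below uses invented numbers purely to illustrate the point: where an exposure is universal within a population, cases and non-cases look identical on that exposure, and only a comparison between populations can reveal it.

```python
# Hypothetical illustration of Rose's argument; all numbers are invented.
import random

random.seed(0)

def simulate(exposure_rate, n=100_000, base_risk=0.01, exposed_risk=0.03):
    """Return (proportion of cases who were exposed, overall incidence)."""
    cases = exposed_cases = 0
    for _ in range(n):
        exposed = random.random() < exposure_rate
        risk = exposed_risk if exposed else base_risk
        if random.random() < risk:
            cases += 1
            exposed_cases += exposed
    return exposed_cases / cases, cases / n

# Within a population where everybody is exposed ('everybody smoked twenty
# cigarettes a day'), the exposure cannot be detected: 100% of cases are
# exposed, but so is 100% of everyone else.
print(simulate(exposure_rate=1.0))  # (1.0, ~0.03)

# Only comparing whole populations with different exposure levels reveals it:
print(simulate(exposure_rate=0.0))  # (0.0, ~0.01): a threefold difference
```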

While on its own this might seem the type of insight that could appear in an epidemiology textbook, Rose argued that this had much wider implications about societal disease prevention. His view was that up until this point, public health policy had been too fixated on the identification of ‘high-risk’ groups. While this approach had its merits (and he pointed to the relative success of the smoking cessation randomised controlled trial in Whitehall I), and could potentially be very motivational for the individuals concerned, Rose identified some problems, which he argued were particularly salient for a ‘mass disease’ like CHD. Firstly, that any screening programme would inevitably miss ‘borderline’ cases who might have also benefited from whatever intervention was available. Secondly, and more significantly:

[screening] is palliative and temporary, not radical. It does not seek to alter the underlying causes of the disease but to identify individuals who are particularly susceptible to those causes … it does not deal with the root of the problem, but seeks to protect those who are vulnerable to it; and they will always be around. (Rose 1985, 36)

Rose insisted that this problem was particularly acute for heart disease, Britain’s biggest killer. Because it was so common in post-industrial countries, it was difficult for screening to discriminate between low and high-risk individuals. Rose personalised this dilemma:

I have long congratulated myself on my low levels of coronary risk factors … [t]he painful truth is that for such an individual in a Western population the commonest cause of death – by far – is coronary heart disease! Everyone, in fact, is a high-risk individual for this uniquely mass disease. (Rose 1985, 37)

The implications of this could be read in two ways. On the one hand, it could be argued that prevention was ‘everybody’s business’, as the government green paper had suggested a decade earlier (Department of Health and Social Security 1976). However, Rose was sceptical about highlighting individual responsibility for disease prevention, arguing that ‘[e]ating, smoking, exercise and all our other life-style characteristics are constrained by social norms’. Public health should therefore be concentrating on shifting social norms, or better still, ‘remov[ing] the underlying causes that make the disease common’ (Rose 1985, 37). Nonetheless, Rose acknowledged that prevention at the population level had some drawbacks, the most problematic of which was the ‘prevention paradox’. This he summarised as a ‘preventative measure which brings much benefit to the population offers little to each participating individual’, a predicament that he claimed had been ‘the history of public health – of immunization, the wearing of seat belts and now the attempt to change various life-style characteristics’ (Rose 1985, 38).
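The paradox is easiest to grasp as arithmetic. The worked example below uses invented figures, not Rose’s own data: because the many at low risk generate most of the deaths, a modest population-wide shift can prevent more deaths than a targeted programme, while offering each individual very little.

```python
# Worked arithmetic for the 'prevention paradox'; all figures are invented
# for illustration, not drawn from Rose's papers.
population = 1_000_000
hr_share, hr_risk = 0.05, 0.010   # 5% identified as 'high risk', 1% mortality
lr_share, lr_risk = 0.95, 0.003   # everyone else at 0.3% mortality

hr_deaths = population * hr_share * hr_risk   # 500
lr_deaths = population * lr_share * lr_risk   # 2850: most deaths arise among
                                              # the many at low risk

# Targeted strategy: halve mortality in the identified high-risk group.
saved_targeted = hr_deaths * 0.5                   # 250 deaths prevented

# Population strategy: shift everyone's risk down by a modest 20%.
saved_population = (hr_deaths + lr_deaths) * 0.2   # 670 deaths prevented

print(saved_targeted, saved_population)
# Yet the average individual gain from the population strategy is tiny
# (~0.07 percentage points of absolute risk), against 0.5 points for a
# high-risk individual in the targeted programme: much benefit to the
# population, little to each participant.
```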

Rose’s unsparing appraisal of the contradictions of lifestyle public health and its focus on the individual was his ‘big idea’ (Hofman and Vandenbroucke 1992). It used insights from the Whitehall study to reason that trying to change people’s behaviour without changing the circumstances in which they practise that behaviour was at best only ever going to be partially successful. In his later book, The Strategy of Preventive Medicine, Rose expanded this critique to get to the core of how he thought public health should view itself:

in order to grasp the principles of public health one must understand that society is not merely a group of individuals but is also a collectivity … Society is important in public health because it profoundly effects the lives and thus the health of individuals. (Rose 1992)

Over the next two decades Rose’s idea would be endlessly debated, critiqued and celebrated in epidemiological and medical journals (Charlton 1995; Færgeman 2005; Doyle et al. 2006). He was successful in prompting a fundamental questioning of the tenets of the prevailing paradigm of lifestyle public health. Using his experience from the first Whitehall study, Rose had argued that focussing on the individual’s susceptibility to a risk factor was only half the story; public health had also to consider the risk factor itself. Furthermore, the way in which society promulgated norms, and indeed organised itself, had health effects for individuals. There was, then, no simple divide between public actions with private health effects, or private actions with public health effects.

3 Beyond Public and Private

In the post-war period there were a number of important shifts that had a significant impact on public health that could not be easily categorised as either ‘public’ or ‘private’. The complex role of social structure, and especially social inequality, in shaping health outcomes illustrates the interaction between public and private in determining individual and collective health. Moreover, as we discuss in this section, there were other factors that also played a role in determining population health, such as the environment. Towards the end of our period, novel technologies offered new possibilities for re-imagining the public or publics and their health. Seeing beyond the public/private divide allows us to rethink both.

3.1 Social Structure

Writing in The Lancet in 1986, Alex Scott-Samuel, a public health doctor in Liverpool, argued that ‘social inequalities in health [were] back on the agenda’ (Scott-Samuel 1986). In part a concise summary of the depth and breadth of health inequalities research that the 1980 Black Report had sparked, the article also pointed to the impression that up to this point, structural and socioeconomic determinants of health had been somewhat neglected as public health policy pursued an individualistic lifestyle agenda in post-war Britain. While some commentators have challenged this view, there can be no doubt that the 1980s saw a resurgence of research interest in differentials in health (Macintyre 2002). Scott-Samuel’s article noted ‘the number of “local Black reports” by both statutory and community agencies’ that had been produced, as well as placing the Whitehall I research alongside recent work by sociologists and social policy researchers such as Mildred Blaxter, Julian Le Grand and David Blane. Blaxter’s research primarily concerned health service use and the intergenerational effects of poverty and inequality, while Le Grand wrote extensively on structural and fiscal explanations for inequality (Blaxter 1983; Blaxter and Paterson 1982; Le Grand 1978; Muurinen and Le Grand 1985). Both offered correctives to suggestions that disparities in health were either a product of people’s lifestyles, or merely a matter of statistical artefact. Blane focused on the Black Report as a means to address some of the main criticisms that had been levelled at health inequalities research. His analysis identified four principal explanations for the disparities that Black had identified: artefact; selection; cultural or behavioural; and materialist (Blane 1985).

The artefactual critique referred to the possibility that the way in which the five social classes had been defined by the Registrar General since 1913 might firstly be too crude to accurately describe the complexities of class and its relation to occupation, and secondly, that ‘the workforce in semi and unskilled manual jobs is shrinking as such work is increasingly mechanised and automated … newer [younger] recruits to the workforce must move into skilled or white-collar jobs’ (Blane 1985, 424). The consequence of this was that disparities in mortality would be exaggerated because older people would be overrepresented in lower social classes. While this explanation could be relatively easily eliminated by adjusting for age in statistical models, Blane also cited Whitehall I as being instrumental in rejecting it: the hierarchical divisions in an otherwise ‘homogenous industry’ were clearer than those in society at large, meaning that rank in itself was plainly a factor.
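How ‘adjusting for age’ disposes of the artefactual critique can be shown with a toy calculation. The rates below are invented; the sketch simply applies direct age standardisation, re-weighting each class’s age-specific death rates to a common reference age structure.

```python
# Toy illustration of direct age standardisation; all rates are invented.
# An older class looks unhealthier on crude rates even when its age-specific
# mortality is identical to a younger class's.

standard_pop = {"under 45": 0.6, "45 and over": 0.4}  # reference age structure

classes = {
    # class: {age band: (share of class in band, deaths per 1,000)}
    "class I": {"under 45": (0.7, 2.0), "45 and over": (0.3, 10.0)},
    "class V": {"under 45": (0.4, 2.0), "45 and over": (0.6, 10.0)},
}

for name, bands in classes.items():
    crude = sum(share * rate for share, rate in bands.values())
    standardised = sum(standard_pop[age] * rate
                       for age, (share, rate) in bands.items())
    print(f"{name}: crude {crude:.1f}, standardised {standardised:.1f}")

# class I: crude 4.4 vs class V: crude 6.8, yet both standardise to 5.2 --
# here the whole crude gap is an artefact of age structure alone.
```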

The narrative of selection posited that healthier people were likely to move up the social classes. Critics suggested that health inequalities research had been too static in its analysis, failing to take into consideration the longitudinal effects of social mobility. Indeed, this critique stretched back to the 1950s and the sociologist Raymond Illsley’s work on infant mortality (Illsley 1955; Stern 1983). Blane conceded that selection might well be a ‘real phenomenon … [but] data suggest that it is small, and that even this is limited to certain age groups and parts of the social structure’ (Blane 1985, 431). The Whitehall researchers had also been alive to this issue; Michael Marmot had lamented privately that ‘when examining the relationship of grade to mortality in the original Whitehall Study, we had no information on job histories’ (M Marmot to AM Semmence, 7 August 1980). Whitehall II attempted to address this by maintaining regular contact with the civil servants, and asking questions about their employment history to provide a more rounded picture of how they might move up or down hierarchies. Ultimately, Blane was insistent that ‘[o]nly materialist explanations can simultaneously account for both … the improvement in general health and the maintenance of class differences in health’ observed in post-war Britain. Importantly, he argued that lifestyle explanations could also be subsumed into this analysis, again citing papers from the first Whitehall study to bolster his arguments. Individuals made choices constrained by their socioeconomic circumstances: ‘behaviour cannot be separated from its context’ (Blane 1985, 434).

Blane’s assessment of the Black Report provides a snapshot of several of the key issues in health inequalities research during its boom in the 1980s. His analysis of the evidently contested nature of health inequalities research, and the theoretical challenges directed at it, prefigured the controversy that would engulf Margaret Whitehead’s The Health Divide, published in 1987 and widely viewed as the follow-up to the Black Report (Townsend et al. 1982; Berridge 2002). Blane’s recollections elsewhere of his experiences in the early 1980s also reveal the tight professional and educational links between many of these researchers (Blane 1985, 16). Similarly, John Fox, a statistician whose work contributed to the Black Report, recalled that:

I think that there was more research done in the 1980s on health inequalities than at any other time … [there] was a background for lots of people supporting each other, strong networks building up, which didn’t exist before that time. (Berridge 2002, 168)

The flame of health inequalities research continued to be carried throughout the 1990s, most prominently by the Whitehall II study, which reported in 1991 that its ‘findings show[ed] that socioeconomic differences in health status have persisted over the 20 years separating the two Whitehall studies’ (Marmot et al. 1991). From this juncture onwards, the Whitehall studies began to become a byword for health inequalities in public discourse. Michael Marmot cannily used evidence from Whitehall in the publication of the Acheson Report on health inequalities in 1998, and would lead his own review of the issue in 2010 (Acheson 1998; Marmot 2010). In recent years, popular books from Richard Wilkinson and Kate Pickett, social geographer Danny Dorling and economist Joseph E. Stiglitz have also helped to highlight the issue to politicians and policymakers (Pickett and Wilkinson 2010; Dorling et al. 2015; Stiglitz 2013), to the extent that, ahead of the 2010 election, the then Conservative leader of the opposition David Cameron ambitiously (if ultimately fallaciously) promised to ‘banish health inequalities to the history books’, arguing they were one of the ‘most unjust, unfair and frankly shocking things about life in Britain today’ (Bowcott 2010).

3.2 The Environment

Health inequalities were, of course, not a new influence on collective health, and other, older factors, such as the environment, also had an enduring legacy. Notions of the environment were central to nineteenth-century public health practice, but in the middle part of the twentieth century, as public health’s gaze turned towards the individual, the perceived influence of space and place on health altered. Some, more traditionally ‘environmental’ concerns did persist. Medical Officers of Health, for example, retained their role in policing environmental health, a responsibility they took seriously (Jackson 2005; Thorsheim 2009; Corton 2015). During the 1950s, air pollution posed a particular threat to public health. The burning of coal significantly reduced air quality, and in 1952, the Great Smog led to thousands of deaths, especially in London (Berridge and Taylor 2005). As a result, the Clean Air Act was introduced in 1956. The Act required those living in smoke control areas to burn smokeless fuels in their homes (Berridge and Gorsky 2012). Yet, in some ways, air pollution was the exception that proved the rule: the environment mattered as a political issue, but it was seen as a separate topic, one that had little to do with public health (Berridge 2007, 208). For Berridge, ‘The environment had been almost entirely absent from the redefined public health ideology that had emerged in the 1970s … New concerns about occupational health or about environmental pollution had no particular connections with public health’ (Berridge 2007, 208).

Yet, by the 1980s, there were signs of a different understanding of the environment and its impact on health. David Armstrong argues that the environment was seen as posing two kinds of threat. The first concerned the interaction of bodies with nature, including new environmental dangers such as ‘noxious gases from car exhausts in the air; chemicals from aerosols in the ozone layer; acid rain from industry in the water and pollution in the soil …’. The second concerned the dangers that bodies themselves posed, or more accurately the behaviours of particular bodies. The AIDS epidemic, Armstrong asserts, existed ‘within a context of wider social activities: at one level the problem is conceptualised in terms of the socialising patterns or culture of gay men; at another it is the complex social interactions involved in needle sharing and blood transfusion’ (Armstrong 1993, 405). The revival of infectious disease as a public health issue in the form of HIV/AIDS, as Berridge also points out, required public health policymakers and practitioners to reconsider the role of the environment. Indeed, the environment came to be seen as crucial not just to the aetiology of communicable disease, but to chronic disease too. Passive smoking, for example, brought together a concern for the individual and their behaviour with the physical environment and the generation of second-hand smoke (Berridge 2007, 208–40). By the late 1990s, the conceptualisation of the ‘environment’ and its impact on the public’s health appeared to have widened still further. Public health issues such as obesity were increasingly depicted in environmental terms. The notion of the ‘obesogenic environment’, or the response of normal physiology to an abnormal environment, relied upon a notion of the environment that is not only physical but also economic and sociocultural (Egger and Swinburn 1997). In some ways, then, ‘the environment’ has come to be a synonym for the structural influences on the public’s health, but one that places emphasis not just on socioeconomic structure, but on space and place too.

3.3 New Spaces

Changes in information and communication technology in the last decades of the twentieth century created new spaces that allowed for new forms of interaction between publics and public health authorities. One of the most radical changes in communication in recent years has been the arrival of the World Wide Web. Although publicly released in 1993, it was not until the turn of the millennium that this technology became widely accessible to British people (UNdata, n.d.). Nevertheless, as usage increased, public health authorities had to grapple with the speed and volume of communication that was now possible. ‘The internet’—as a technology, an information store and the cultures built around its use—was a new space for the making and remaking of both the public and private spheres and associated actors, something historians are beginning to get to grips with (Abbate 2017; Turner 2017). Although the internet only appeared towards the end of our period, it was already becoming crucial in assessing what the public was, how it expressed itself and how it was understood by public health authorities.

One of the most prominent ways in which the internet figured in recent debates about public health was during the crisis around the Measles, Mumps and Rubella (MMR) vaccine (c. 1998–2004) (Speers and Lewis 2004). Anti-vaccination and MMR-sceptic information was shared through static web pages online, but there is clear evidence that parents with internet access were sharing this information with others in their peer group (Selway 1998). Such interactions demonstrated that many General Practitioners were not vaccination experts, and so were unable to adequately counter detailed and specific complaints about vaccinology, epidemiology and alternatives to the triple-dose vaccine (Petrovic et al. 2001). Parents also expressed uncertainty as a result of being presented with contradictory information from doctors, the government, the press, fellow parents and the internet. In response, in 2002 the Department of Health made a concerted effort to use this new technology to educate both parents and health professionals about the importance of MMR to the public’s health. Utilising recent developments in risk communication tools, the website ‘MMR: The Facts’ and specific guidance issued through the Department of Health’s webpages for health professionals directly answered the ‘myths’ and counter-arguments offered by vaccine-sceptic groups (Department of Health 2002a, b). The internet thus offered an outlet for non-traditional voices to express concerns and challenge expertise, but it was also a forum for expertise to speak back to its critics and tailor its responses to those publics most susceptible to these counter-narratives.

As internet usage increased, other forms of public behaviour and interaction with the internet, beyond vaccination, also merit consideration. The traditional press, often in a mocking tone, highlighted the public’s use of websites such as WebMD to self-diagnose and suggested that this might be exacerbating latent hypochondria in the population (Baxter 2013). Similarly, the cultural reception of MumsNet, a community predominantly of mothers sharing parenting advice, questions and frustrations, tells us much about attitudes towards motherhood. The site itself, however, could be a rich vein of data about the sorts of health issues parents were concerned about over time, since it includes not just edited blog posts but also forum contributions and responses from mothers themselves. These have been articulated and preserved in a way that few pre-internet age sources can provide.

Finally, we should also consider not just how technology can act as evidence of public activity, but how it has shaped public activity. For instance, members of the public have actively embraced fitness trackers for recording personal information, sharing it online and even receiving discounts on life and health insurance (Tedesco et al. 2017). The growth of self-tracking using digital technologies has a number of significant implications for individual and collective health, and more broadly for the divide between public and private, as private information is made public, and private companies may benefit from the public’s embrace of such technologies (Lupton 2016). The global nature of such developments also has an impact on the ways in which publics are made and remade. The internet has broken down some geographic barriers while inadvertently strengthening others. For instance, vaccine hesitancy among certain ‘real world’ middle-class social groups has been supplemented and strengthened by anti-vaccine evidence found online and deriving from sources from around the world. Thus, pockets of unvaccinated populations have emerged in some American middle-class neighbourhoods, making cities much more vulnerable to infectious disease outbreaks than they would be if the non-vaccinated were spread more evenly throughout the region (Smith et al. 2004). At the same time, public health authorities and charities have also been able to use the internet to break down taboos or find traditionally evasive publics to make interventions. For example, recent campaigns to encourage young men to seek help for depression and suicidal thoughts have relied upon the internet both for targeted advertising and as a form of confidential consultation (Campaign Against Living Miserably, n.d.; Sueki and Ito 2015). In some ways these technologies are not entirely novel—confidential telephone lines for mental health, vulnerable gay people, children, and so on, have been widespread since the 1980s (Crane and Colpus 2016)—but they have taken new forms as technology has become more mobile and more accessible in the twenty-first century.

4 Conclusion

The development of new technologies and the spaces they create for new publics is just one of the ways in which the boundaries between public and private have been redrawn since 1948.

Although there was never a fixed line between ‘public’ and ‘private’ within public health, the increased emphasis on the public consequences of private behaviours blurred the boundary still further. When combined with developments outside of the public/private dichotomy that nonetheless had an impact on collective health and public health policy and practice, it is tempting to reject these distinctions as no longer of value in attempting to understand public health. Yet, public and private remain useful concepts to think with, as they draw attention to the different interest groups operating within public health and the different political strategies at work.