Introduction

When I wrote up my thesis on the history of self-inflicted injury, I was clear in my decision not to address my long personal history of self-harm. Indeed, I wouldn’t have made any reference to the subject falling within the personal domain at all—either by association with confessional literature or a broader theoretical grounding, such as identity politics—had it not been for the assumptions of others that it must. My response, prompted by my supervisor’s suggestion that any reader would ask why and how I had come to the subject, was partly a defensive measure, partly an argument in itself. By refusing to answer this question, I aimed, as I put it then, “to emphasise the way in which such classification is potentially disempowering and delegitimizing…. Rather than assuming interest only stems from self-involvement, I invite the reader to explore the way in which unpicking the category of self-mutilation can lead us to question the very nature of identity and the existence of a unified self.” (Chaney 2013, p. 25) Five years later, I stand by this statement. And yet, despite this, I have become interested in unpicking the topic further. What does a personal connection to my research subject say about me as a researcher? How am I to understand it, and how should a reader respond? And, perhaps more importantly, what can all of this reveal about the ways in which historical research is written and presented?

History has followed a slightly different path from other areas of social science. When it emerged as a discipline in the nineteenth century, it was largely a positivist enterprise. Scholars sought to use scientific methods of enquiry to pursue what they saw as an objective form of knowledge. They tended to cast themselves as dispassionate observers of the past, uncovering universal laws of human experience (Tosh 2010, p. 177). Even in that era, there were those who questioned this pursuit, arguing for a more relativist approach: not least Nietzsche, who proposed that the object of research was invariably defined by the interests and biases of the historian (Iggers 1997, p. 8). Later, the widespread influence of Postmodernism in the 1960s and 1970s moved historical research still further away from positivist certainties, through both the philosophical theories of Michel Foucault and the narrativism emphasised by Hayden White and others. Despite this, positivism still holds a certain sway. Appleby, Hunt and Jacob have claimed that every history book today reflects the enduring power of the nineteenth-century view of a scientific historical method (Appleby et al. 1994, p. 52). Indeed, in the last two decades, there has been something of a reaction against the Postmodernist view that all truths are relative and there can be no such thing as objectivity, in history or science. Historians have proposed alternative definitions of truth and objectivity, and approaches to history grounded in pragmatism and experience. Mark Bevir re-defines objectivity in history based on “criteria of comparison”; objectivity becomes a kind of critical consensus between historians (Bevir 1994). Marek Tamm similarly emphasises peer review in a description of objectivity based on a “truth pact” between historians, grounded in critical analysis of evidence (Tamm 2014). From a more social perspective, Appleby et al. argue for a “democratic practice of history”, which is sceptical about the dominant views of the past but nonetheless “trusts in the reality of the past and its knowability”, while Thomas Haskell advocates “a version of objectivity”—that on which modern, western human rights are based—as a moral imperative (Appleby et al. 1994, p. 11; Haskell 1998, p. 60).

In common with the positivist approach, these new definitions of objectivity in history tend to emphasise the importance of a consensus view. Yet, if historians are just as much products of their own era as their objects of study, where does this consensus come from, and what does it mean? In this paper I argue that a consensus view is neither possible nor desirable in the writing of history. First, I explore the way objectivity has been defined within science and applied to history, in particular the ways subjective personal experience has tended to be regarded as the opposite of critical thought. However, by writing as omniscient narrators of the past, historians have tended to create the impression that there is only one interpretation of a set of ideas, diminishing the impact of their critiques. In the second section of this paper, I consider the various connections historians have to their subject matter, and ask whether any researcher can be a dispassionate observer. I argue that exploring this link between the researcher and the research can develop a critical awareness of one’s own position that leads to a better quality of research. Incorporating personal material into a narrative is one way of doing so, reminding the reader that any researcher is part of his or her field of research. Yet there are also challenges in exploring personal material. Experience, as Joan Scott has recognised, can come to stand in for evidence, essentializing the very categories of identity we seek to reclaim (Scott 1991, p. 778). In identifying as a particular kind of person—a mental health service user, say—we run the risk of narrowing the field of study, focusing on a rigid conception of identity rather than the way these identities themselves have been constructed. In my final section, I seek to deconstruct the category of “self-harmer” and argue that identity politics within mental health research and practice today is useful only when it too comes under critical reflection.

What is objectivity?

Let’s begin by considering what we tend to mean by objectivity, before asking whether there is any value in re-defining it, as some historians claim. In non-fiction writing, personal experience and objectivity are often set in opposition to each other. As one reviewer put it about my recent book, “At times, the book’s objectivity is undermined by its author’s closeness to the subject matter.” (Barekat 2017; Chaney 2017) But what leads us to assume these two things are related? And are the two, objectivity and personal experience, really mutually exclusive? The sense in which we use these terms today is not universal: objective and subjective meant almost exactly the opposite of their modern meanings until at least the seventeenth century (Daston and Galison 2007, p. 29). As Lorraine Daston and Peter Galison explain, the modern definition of objectivity has also not always been a goal of science. It is only relatively recently that scientists have sought to create “knowledge that bears no trace of the knower”, with objectivity serving as a kind of “blind sight, seeing without inference, interpretation or intelligence” (Daston and Galison 2007, p. 17). It was this shift in approaches to objectivity in the nineteenth century that supported an empirical view of science, in which scientists were cast as observers and compilers, a description applied from the laboratory to the psychiatric hospital. The obituary of one high-profile Victorian psychiatrist, Daniel Hack Tuke, complimented him on being a “scientific sponge, taking up greedily whatever was presented to him and rendering it back uncoloured by any personal tint” (Rollin 1895, p. 719). This view, of objectivity as a kind of filter in the researcher, was easily adopted by historians and social scientists of the same era.

From the beginning, the idea of objectivity in science was also associated with a particular type of practice: the empirical and quantifiable. In medicine, this was based on a biomechanical model of human functioning, which increasingly came to prominence in the later nineteenth century as a “medical materialist” view of mankind (Jacyna 1982). As the American psychologist William James put it in the last decade of the nineteenth century:

Although in its essence science only stands for a method and for no fixed belief, yet as habitually taken, both by its votaries and outsiders, it is identified with a certain fixed belief – the belief that the hidden order of nature is mechanical exclusively, and that non-mechanical categories are irrational ways of conceiving and explaining even such things as human life. (James 1961, pp. 44–45)

Despite being cast as an objective “scientific sponge”, Daniel Hack Tuke also complained that the “dogmatic incredulity” of the “scientific snob… may betoken ignorance, not knowledge” (Tuke 1884, pp. vii–viii). Yet, by the early twentieth century, this view of the science of humankind as biomechanical and quantifiable through observation was widely held. Indeed, this notion of medicine—including mental health—remains popular today.

Of course, there have been many challenges to the idea that scientists, as well as historians, can somehow transcend the world in which they live and work. In their seminal ethnographic study of laboratory life, published in 1979, Bruno Latour and Steve Woolgar explored the way in which statements and ideas resulting from the daily activities of working scientists became accepted as “facts”, concluding that these facts had their own history of social construction (Latour and Woolgar 1986, p. 107). This does not mean that scientific ideas are not important or valuable. It simply reminds us that facts are produced by people, based on differing relationships between these people and the things they study, and may be contingent on a wide variety of factors. These even include the existence of the laboratory itself as a place of communication. It was not until the mid-nineteenth century that scientific life was reorganised around communication. The creation of a scientific network, through which it was assumed a professional consensus would emerge, altered the meanings attached to objectivity. Within this community emerged the understanding that uniformity in scientific results was desirable: what Lorraine Daston calls “aperspectival objectivity” (Daston 1992, p. 600).

So how new can a definition of objectivity in history based on professional consensus be? Such a redefinition sounds somewhat reactionary, running the risk of creating a history that supports the status quo rather than generating any really new ideas. Indeed, Dipesh Chakrabarty views a retreat into objectivity in recent history-writing as a response to the clash between history and the cultural politics of recognition. For some oppressed peoples, this might lead to a rejection of the discipline of history itself, for “historical objectivity is not always to be found on the side of justice” (Chakrabarty 2007, pp. 80–85). In any case, what we refer to as objectivity in a research context more often describes a writing style than a position. We can measure the ways someone shows that they are objective, which is not necessarily the same thing as the way they research or think. A good example of this presentation of objectivity is the development of the case study, which became increasingly standard in medicine in the mid-nineteenth century, around the same time that objectivity became central to science. The author of a case study used certain techniques to present himself (these studies were almost always written by men) as a distant—and thus neutral—observer of a subject. Victorian case studies began to follow a number of conventions in order to emphasise this notion of objective fact over narrative, emulating the classificatory approach of the natural sciences (Hurwitz 2006; Nowell-Smith 1995). This included writing in the passive voice and avoiding personal pronouns, while references to measurements—age, time or other numerical data—also served to imply a distance from the messy complexity of the actual surgical or medical process (Nowell-Smith 1995, p. 57).

And yet a case study was always a personal account of a particular patient a practitioner had treated—a person they may have known over a long period. Take nineteenth-century psychiatric diagnosis, which frequently relied on a patient’s reporting of subjective symptoms: descriptions of experiences which others classed as hallucinations or delusions, or extreme emotional states that might or might not be visible through behaviour (Andrews 1998). The reports of other people were also considered important, both for the certification of a person as insane (an entire section of the medical certificate consisted of “facts indicating insanity communicated by others”) and following admission to hospital, when an interview with a relative or close friend was generally recorded in case books (Suzuki 1999). Indeed, one distinction between psychiatric and other medical case histories is the elevated role given to reported speech in the psychiatric history: the patient was allowed an explicit voice in both narrative and diagnosis. Despite this, the narrative structure of a psychiatric case history in the Victorian era increasingly conformed to the highly stylised, retrospective accounts found elsewhere in science and medicine, “ordered by knowledge of the ultimate outcome” (Hurwitz 2006, p. 221). In other words, a person’s admission to an asylum was presented as inevitable, even if that inevitability was only visible in retrospect. This can be the case in patient accounts as well as medical ones, which reminds us how much such narratives are shaped by the stylistic conventions of autobiography as well as science. In 1880, an “Autobiographical Letter from a Patient” was published in the Journal of Mental Science by Bethlem Royal Hospital superintendent George Savage. The six-page letter, written by a patient before his discharge from Bethlem, was allowed to stand alone, with no commentary or analysis whatsoever. Yet as with other case studies, the retrospective account formed a linear narrative, giving the reader the impression that every element of the story contributed to the end result: the patient’s hospitalisation for illness. His narrative thus became a moral story of development, as well as a medical one of disease. Indeed, moral and medical are almost impossible to untangle in the account, for the patient noted that he “always imagined himself to be enjoying [perfect good health] …, which could never have been the case whilst I was leading such a wild life” (Savage 1880, p. 388). The suggestion that good health could not be compatible with a morally dubious lifestyle reminds us of the difficulty of separating “medical” from “moral” in any case study, for notions of what is right or wrong colour the ways in which behaviour is interpreted and narratives structured, something that affects the writer just as much as his or her subject. A story requires an ending, a sense of closure that, as Hayden White puts it, is also a demand for “moral meaning” (White 1980, p. 24).

Like medical case studies, modern research histories are also narratives, in which a set of ideas is ordered to make a point or create an outcome. As with patient case records, they are written in a stylised manner: often in the third person, usually in the passive voice and in the past tense. This style of writing creates the “appearance of a dispassionate approach”, which sits at odds with the awareness of many historians that “[h]istorical interpretation is a matter of value judgements, moulded to a greater or lesser degree by moral and political attitudes” (Appleby et al. 1994, p. 246; Tosh 2010, p. 190). In addition, historical texts include other “signals of factuality”: footnotes, bibliographies, quotations and other forms of evidence, such as charts, tables and figures (Tamm 2014, p. 276). In his recent history of self-harm in Britain, Chris Millard writes in the present tense, arguing that historians’ tendency to use the past tense creates a false sense of distance from the period under study. It further implies that history is a set of discrete and concrete facts about the past, while failing to acknowledge (as in the medical case history) that these facts have been selected by a writer to shape a particular narrative written in, and reflecting, the present (Millard 2015, pp. 7–11). Histories are about the period in which they are written as well as the time they are ostensibly about, and are framed by a set of contemporary views about how the world functions. Yet, although this perspective has been accepted by many historians since at least the late twentieth century, most of us nonetheless write ourselves out of history. This subtly changes the kind of history we write, implying that we are narrators of neutral facts, rather than actively engaging with, and often challenging, the universal norms widely accepted in the modern era. Beverley Southgate calls this the “disease” of history, which leads to a necessarily conservative view of both past and present. If historians are engaged in making “connections with the past in order to illuminate the problems of the present and the potential of the future” (Appleby et al. 1994, p. 10), as many of us believe, then the way in which we have come to those problems is important in itself. Claiming objectivity, however we define it, prevents us from considering this approach.

Is history always personal?

“Historians of the recent past”, writes Barbara Taylor, “often witness its remnants disintegrating around them; sometimes they even participate in this process.” (Taylor 2011, p. 193) She goes on to describe the closure of Friern psychiatric hospital, alongside her memories as a patient during the asylum’s final days. To some extent, personal involvement is likely in much history of the later twentieth century and beyond. Does it make a difference when a researcher has lived through the period about which they’re writing? And what if they have played an active part in this history? Jeremy Popkin notes that the “largest single group” of historians who have published autobiographies are those “whose lives were directly affected by the great dramas of the mid-twentieth century”, including the Second World War, forced emigration and the Holocaust (Popkin 2005, p. 8). In these instances, the historian becomes a witness to major events, and Popkin suggests that their critical training may enable them to put their own lives into wider perspective and context more successfully than other narrators (Popkin 2005, p. 6). When historical research largely focused on political narratives, “experience of public life was widely regarded as the best training for historians”, while wartime service was thought to sharpen the insights of those working on the history of political diplomacy (Tosh 2010, p. 168). Thus, in some cases, personal experience has been considered a valuable element of histories, although usually not explicitly voiced as such. In the examples cited by Tosh, experience becomes an unacknowledged background skill, improving the historian’s analytical ability. The historian’s autobiography, meanwhile, is usually published quite separately from his or her academic research. Taylor, however, used her own experiences as part of her argument, as a source alongside the other primary material she analysed, to criticise the failure to replace the asylum system with any useful alternative for service users and to expose the way an “anti-dependency mantra” emerged in modern mental healthcare (Taylor 2011, p. 201). Might she have come to the same conclusions without any personal attachment to the subject? Certainly, her other sources suggest this was a viable argument. Yet her visible political perspective alters the relationship with the reader. It demands an engagement, rather than a detachment, that is less likely to occur in a text that declares itself neutral, drawing on the ability of autobiography to “connect ordinary human experience and deep theoretical questions” (Popkin 2005, p. 9).

This connection to the social and political goals of research is not only the preserve of historians. It is often the case that the objects of our study have a personal investment in their research and theories. When I first came across the work of American psychoanalyst Karl Menninger as an undergraduate, I found unsettling, even ludicrous, the determined force with which he set out ideas at odds with modern understandings of self-inflicted injury. In Man Against Himself (1938), Menninger’s view of self-mutilation was extremely broad, conflating acts as diverse as castration, hair-plucking, nail-biting, “purposive accidents” and even the experience of illness itself; something so different from the modern psychiatric category that I found it hard to take seriously. When I returned to this work in my research on the history of self-harm, however, it was Menninger’s rhetorical style and his florid comparison of anecdotal and fictional examples with real cases he had treated that intrigued me, making me want to try and better understand his position. How had such a style of writing become popular, even respected? Some of Menninger’s contemporaries did ridicule his colourful turns of phrase and anecdotal examples (Hale 1995, p. 84). Yet his first book for a general audience, The Human Mind, became the best-selling psychology book of its era (Menninger 1995, p. 99). Nor was it as if Menninger tried to hide his personal agenda. He certainly did not adopt the classic style of objectivity. Alongside his claims for a universal understanding of human experience, Karl Menninger advocated social change, publicly declaring that scientists and researchers held the ideal position from which to create a better world (Menninger 1942, p. 6). His volume on suicide and self-mutilation, Man Against Himself, was published in 1938, when mass conflict threatened the world for a second time. In the same year, Menninger gave a lecture to the Herald Tribune Forum in New York, on “Some Observations Concerning War from the Viewpoint of a Psychiatrist”, in which he definitively claimed that “what suicide is for the individual, war is for the nation” (Menninger 1959).

Karl Menninger’s overarching aim was to explain the conflicts between nations and social groups through the psychology of individuals. Man Against Himself is sometimes less about the individual acts of people who injure themselves, and more an effort to explain the major political problems facing the world. This link between understanding individual psychology and explaining national and global concerns was not unique to Menninger’s time. In the 1970s, for example, American psychoanalysts and social critics tried to explain a perceived national decline through the concept of narcissism (Lunbeck 2014). Today it remains common to try and understand political change through similarly individualised or psychological approaches: “Is Trump mentally ill?” asked the Washington Post in 2017, “Or is America? Psychiatrists weigh in.” (Lozada 2017) The assumption in the article is that one of these things must be true. Yet the connections made here reveal more about the time in which a person is writing and the concerns of the society they live in than they necessarily do about the people or things they attempt to diagnose or explain.

One of the most useful elements of exploring historical ways of thinking about health and illness is that the views put forward are so alien to us that it is more immediately obvious they are shaped by the context in which a person is writing. Yet the norms that shape our own era—and us as researchers—are often so taken for granted as to be almost invisible. So can a historian ever be a neutral observer? More to the point, should they even aspire to be? Acknowledging the personal connection between the historian and their object of study has been variously suggested as a moral imperative (Haskell 1998, p. 60) and a democratic reflection of diversity (Appleby et al. 1994, p. 3). Yet, sadly, one response to the relativism of postmodern approaches over the past few decades has been for many historians to retreat to the rhetoric of the neutral observer, able to “renounce any standards or priorities external to the age they are studying.” (Tosh 2010, p. 193) As John Tosh notes, this lofty goal is unattainable, and may even be damaging to the historical process: if the historian comes to believe they truly are neutral, they will lack the self-awareness to be critical of their own assumptions and values. For Tosh, E.P. Thompson “made no secret of his sympathies—even acknowledging that one chapter in the Making of the English Working Class was polemic”, and thus “the confessional mode of historical writing should be welcomed” (Tosh 2010, p. 207). Even Thompson, however, wrote himself out of the history he created, never addressing his “own role in determining the salience of certain things and not others”. This, Joan Scott states, resulted in the opposite of his stated aim: rather than historicizing the category of class, he ended up essentializing it. Thompson encourages the reader to forget that his history was “a selective ordering of information”, instead presenting the experiences he recounts as objective and making class appear to be “an identity rooted in structural relations that pre-exist politics” (Scott 1991, pp. 785–786). If a historian explains how she or he came to a subject, then, it may allow both historian and reader to gain a better critical appreciation of the topic.

The difficulty in practice is that the confessional has long been bound up in issues of power relations. Certain kinds of knowledge, and certain ways of presenting knowledge, have been and are considered more reliable than other kinds of knowledge. In the nineteenth century, white, middle-class Victorian men believed themselves to be more objective than other groups because they assumed they exhibited the highest stage of mental evolution (Kuklick 1991, pp. 82–85; Stocking 1987, p. 225). They were more “rational” than women and so-called savages and, in addition, they were the most likely group to possess the “altruistic sentiments” that enabled them to understand humanity as a whole and thus to make generalisations about human life and experience (Dixon 2008; Spencer 1870, pp. 578–627). Perhaps more tellingly, these men also tended to assume that their engagement with professional communities allowed them to “subordinate their own self-interest” to shared goals (Haskell 1998, p. 58). It is no surprise, then, that feminist history has regarded such claims to objectivity as “ideological cover for masculine bias” (Scott 1991, p. 786). Even the sources used by historians form part of this structure, belonging as they do to “certain relations of privilege” (Chakrabarty 2007, p. 85). Some stories are more likely to be found in archives and collections than others. Even those that are may appear in mediated form. When we read a historical psychiatric record, we are engaging with an account that assumes the patient is an unreliable narrator, and that his or her words need interpretation to extract rational meaning.

The long history of setting objectivity, rationality and professionalism squarely against the personal and emotional may be why some modern historians feel—as I have done in the past—uncomfortable or defensive about mentioning the personal dimensions to their work. This is particularly the case when these experiences sit outside the bounds of what is socially accepted as normal, such as experiences of mental ill-health. When Jeremy Popkin’s father, historian of philosophy Richard H. Popkin, decided to publish his autobiography, he feared that “candid accounts of his strong religious impulses and his struggle with manic depression [would] undermine the credibility of the scholarship to which he had devoted his life” (Popkin 2005, p. 4). Although, in this instance, the confessional formed an entirely separate realm from the professional, Popkin nonetheless feared that this confession would “taint” his other work with irrationality. Similarly, in the acknowledgements to his recent book The Age of Stress, Mark Jackson becomes almost apologetic when he mentions the personal genesis of his research.

The argument presented here is, therefore, in some ways merely the rational expression of a deeply intuitive, and perhaps deluded, quest for psychosomatic health and stability… Over the last year or so I have endeavoured to heal, or at least conceal, the fault lines that temporarily fragmented my life and work. Any remaining flaws in the fabric of this book are the product of my own limited resilience under stress. (Jackson 2013)

Jackson’s phrasing implies that his personal quest for stability was somehow at odds with his academic credentials (the intuitive is “perhaps deluded”); he even asks the reader for forgiveness. Popkin, meanwhile, eventually decided to include an “honest” account of his experiences in his autobiography.

This threat of lost credibility really concerns the reader rather than the writer. The assumption is that, in finding out something about the writer’s mental health, a reader may re-evaluate other, seemingly unconnected, aspects of the writer’s work. When recounting her experiences of self-harm, Sharon Lefevre wrote: “My aim is merely to endorse the experience as being ‘real’ and evidence of my ‘truth’. The evaluation of this book, however, can only be validated by your agreement to believe in my ‘truth’.” (Lefevre 1996, p. 6) As Lefevre made clear, her experience was one of many. Yet she also acknowledged the pact between herself and the reader. If the reader rejects her “truth”, her experience is meaningless because it cannot be shared. On the other hand, if the reader accepts it, her story becomes more than a personal narrative. As Mark Cresswell puts it, such testimony can “itself be considered a branch of self-advocacy” (Cresswell 2005, p. 10). The personal narratives of self-harm survivors are powerful re-tellings of stories founded in direct experience, which may give rise to specific programmes of activism: highlighting oppressive practices within psychiatry and/or offering advice as to improved service provision (Cresswell 2005, pp. 10–11). As we see in Barbara Taylor’s work, it is perfectly possible for this to take place in the context of broader historical research as well as within memoir-based testimony. Yet as Joan Scott notes, the discursive nature of experience and the politics of its construction are at issue here, for “[w]hat counts as experience is neither self-evident nor straightforward; it is always contested, and always therefore political” (Scott 1991, p. 797). This brings us on to the final topic for consideration: the link between personal research and identity politics.

Am I a self-harmer?

For the sake of argument, let’s accept that no researcher can be objective, in the sense of being entirely detached and dispassionate from their object of study. We nonetheless seem to assume that some researchers’ perspectives are more obvious to the reader than others. I remember as an undergraduate in the late 1990s being encouraged to spot the Marxist historian (and, later, the Foucauldian): perhaps on the assumption that these types of writers were somehow more visible than other kinds. A vaguely liberal or conservative approach might be less immediately apparent. After all, since the late twentieth century, neoliberal capitalism has been widely accepted as commonplace, becoming a set of norms from which everything else departs (Fisher 2009). In addition to this, there are certain kinds of research subject that invite the assumption of personal interest. I was surprised by how direct some of the responses from my fellow researchers were during my PhD. “How did you become interested in the topic?” I was asked on more than one occasion. “Are you a self-harmer?” The same sorts of questions don’t seem to be asked of someone researching, say, the history of ovarian surgery or public health policy. So not only, it appears, are researchers not objective, but there are also certain kinds of topics about which it is assumed that research is personal. Our reactions, of course, might say as much about our fear of being unmasked as survivor researchers as they do about the person asking: a fear of losing credibility like that voiced by Richard Popkin. Did these people genuinely expect me to admit to a personal connection or did I simply read the question in that way because I knew the answer? Perhaps they merely wanted reassurance that I was not personally affected by the topic, or that I was talking about a friend, family member or former patient. One colleague even assumed, for no reason I could gather, that I must formerly have been a mental health nurse.

So am I a self-harmer? Well, what is a self-harmer anyway? This category of person was first added to the Oxford English Dictionary in 2006, with the earliest use of the term dated to 1980 (‘self-harm n.’ 2017). Self-harmer, we are told, is a derivative of “self-harm”: a person who deliberately inflicts injury on themselves. On the surface, this seems like a fairly obvious description. A researcher is a person who researches. A coffee drinker is a person who drinks coffee. Fair enough. But self-harmers are objects of scientific and medical enquiry in a way that researchers and coffee drinkers are not. The noun self-harmer emerged from the earlier psychiatric concept of self-harm; the OED goes on to say that self-harm occurs “esp. as a manifestation of a psychiatric or psychological disorder”. Thus the description of someone as a self-harmer leads immediately to other assumptions about them. Of course, we might also make generalisations about a coffee drinker, not least the assumption that they like coffee. But this is quite a different category from “self-harmer”. A coffee drinker is not, or not usually, a “kind” of person, while a self-harmer fits more closely with Ian Hacking’s notion of “making up people”: a scientific category (self-harm) brings a new kind of person into existence (Hacking 2007, pp. 285–286).

You might argue that this is not a new thing. We can certainly find earlier examples of people who injured themselves being referred to as a type of person. In 1906, psychiatrist George Savage spoke of “the Self-Mutilator”: a class of “girls” who were “allied to the hysterical”, not insane but “on the borderland of insanity” (Savage 1906, p. 490). Yet most people defined in this way at the turn of the twentieth century would not have been aware they were part of any such group at all. Meanwhile, American ophthalmologists George Gould and Walter Pyle coined the term “needle girls” in their 1897 book to describe a “peculiar type of self-mutilation… sometimes seen in hysteric persons” of “piercing their flesh with numerous needles or pins.” (Gould and Pyle 1897, p. 735) The small number of cases these two doctors drew together ranged across continents and a period of over fifty years. Unlike the “fasting girls” of the same era—who were reported widely in the press, and had a long history of religious significance before psychiatrists became interested in them (Brumberg 1988)—these needle girls were not so much a “kind of person” as a retrospective categorisation. We can see the same with other medical categories of self-mutilation: the “motiveless malingerers” described in the British Medical Journal, or even the “Ultramontane girls of the Continent”, among whom The Lancet claimed in 1874 that “stigmatising” (the intentional creation of wounds resembling stigmata) had become “a trade” (Louise Lateau 1871, p. 604; Motiveless Malingerers 1870, pp. 15–16). None of these groups would have interacted with the ways in which they were categorised, as it is highly unlikely they even knew about the category at all.

In contrast, self-harmer means something more to a modern, non-medical audience, used to understanding the world through highly specific identities. These identities may be, but are not always, related to interaction with such categories. Hacking touched on this in his work on “making up people”: high-functioning autism, for example, was not a way to “be a person” until the first people who had been diagnosed as autistic (which itself could only happen after the introduction of the diagnosis in 1943) “recovered”: “a few of those diagnosed with autism developed in such a way as to change the very concept of autism” (Hacking 2007, p. 304). From the 1990s onwards, some of these people adopted autism—or alternatives such as neurodiverse or non-neurotypical—as an identity. Similarly, other groups began to adopt and critique identities that had previously been perceived solely through a medical lens: the disability rights movement, for example, as well as mental health groups such as Mental Patients’ Unions and Survivor groups (Cresswell 2005; Gallagher 2017). Yet by coming together through identities based around specific experiences, these activists also run the risk of reproducing the very system they seek to protest against. As Joan Scott puts it, their experiences become incontestable evidence that “weakens the critical thrust of histories of difference” and prevents us exploring how such differences are established and operate (Scott 1991, p. 777).

In the late twentieth century, this type of activism became known as identity politics. As a term, identity politics was first used by Anspach in 1979 and became widespread in the social sciences by the 1990s to describe a huge range of different forms of activism, often focusing on race, gender, sexuality or disability (Bernstein 2005, p. 47). Again, one of the many criticisms levelled at identity politics has been that it essentializes categories, ending up reproducing normative structures. Yet, as Mary Bernstein puts it, “a shared collective identity is necessary for mobilization of any social movement” (Bernstein 2005, p. 59). Expressions of identity can be deployed as a political strategy as well as being a potential goal of activism (for example, legitimisation of stigmatised identities such as “mental health service user” or “self-harmer”). Moreover, many kinds of identity-based activism have become research fields in their own right: gender, Queer, Black, disability and, most pertinently for this article, Mad studies. A two-part edition of Asylum: the magazine for democratic psychiatry recently declared that Mad studies had “come of age” in the UK. In an introductory piece, Helen Spandler explored the connections between Mad studies and Queer studies. Both critique culturally accepted ways of being “normal”, as well as the practices that draw on a binary model of humanity (normal/abnormal, gay/straight), with “a shared vision for a wider transformation of society” (Spandler 2017, p. 5). This notion contradicts many of the criticisms levelled at so-called identity politics by a wide range of scholars, who have tended to assume that a focus on identity polarises particular groups, detracting from the possibility of universal social change (Bernstein 2005, pp. 49–55; Rose 1996, p. 39).

Yet one challenge in applying the ideals of identity politics to research is that a personal connection can come to be viewed as necessary. A few years ago, I presented an extract from my research at a history of psychology conference. The first question came, hostile in tone, from someone who had already identified themselves as a service user: “why are you studying self-harm? Is it because it’s a trendy topic now?” This is something of a reversal of the previous reactions I’ve mentioned: rather than assuming that someone has a personal connection with a topic, there is the expectation that they ought to have. And, further, there is the expectation that if this connection is not made explicit, it detracts from the research in some way, or marks a personal failing in the researcher. Yet voicing an identity is a position or argument in itself. In addition to the confessional nature of the admission, which may not always be useful or relevant, it creates a sense of expectation and raises additional questions. How do I decide how to describe myself? If I admit to personal experience, does that require the acceptance of an identity category? Am I a self-harmer, or a person who has self-harmed? A service user, or someone who has used mental health services? And, beyond this, answering questions with identity or experience does not necessarily address more important issues, such as how I would situate myself in relation to psychiatric diagnosis, medication or the categories that my research critiques.

I might contend that I am simply a researcher who happens to have direct personal experience of my research subject. Lived experience has, after all, become a neat way of describing our relationships with mental health, while avoiding essentializing medical categories. Yet this too can be problematic. Jijian Voronka writes of the risks of “strategic essentialism” in considering people “with lived experience” (of madness or the mental health system) as a category.

By subsuming all of the ways in which we have made sense of our experience (mad, psych survivor, mentally ill and so on) under the umbrella of ‘lived experience,’ we risk conflating distinct ideological and conceptual explanatory models under the apolitical, liberal and user-friendly language of ‘lived experience’. (Voronka 2016, p. 196)

Even a diagnosis can be something comfortable to hide behind. It is easier to admit to being “mad” than to doing mad things; easier for me to accept the label of “self-harmer” than to admit to inexplicably putting my fist through a bathroom window in the middle of a party. We also risk reproducing other power structures: lived experience becomes largely white, middle-class experience (let’s face it, researchers today are still far more likely to fit this profile than to be from an ethnic minority or working-class background) (Voronka 2016, p. 197). The category of lived experience even diminishes the political power of labels like “mad” and “survivor”: “if we break free from identity categories entirely… how do we make political demands, such as the demand for rights or services?” (Spandler 2017, p. 6) As long as we critique them, then, identity categories may be useful, just as exploring our own experiences as researchers may be valuable.

Conclusion

As I have outlined in this paper, there are problems in incorporating specific aspects of our own lives and experiences into the research process. Doing so, especially in the field of mental health, may result in a loss of credibility, or in the assumption that this is the only possible or valuable way of approaching a topic. Identity and personal experience are just one element of the research process, and not necessarily the most important one. They may draw us to a subject in the first place, and shape our approach. They may also create assumptions as well as lead us to challenge them. When I began my research on self-harm, I held a number of preconceptions based both on my personal experiences of psychiatry and on the work of other historians. It seemed obvious to me that the emergence of a category of “self-mutilation” in Victorian psychiatry was part of a drive towards classification within mental health care at the time, based on a pessimistic and determinist biological model of mental illness. I was surprised to find that, of the small group of practitioners writing on self-injury in the Victorian era, many argued against this view. Instead, they read self-harm as evidence that mental health and illness were part of a continuum (Adam 1892, p. 1148). By combining a number of different behaviours which varied in severity of physical outcome—from hair-plucking to limb amputation—under one term for the first time, they assumed a commonality of experience, extending this to the “nervous, fidgety, restless habits” that, “less perhaps in magnitude, are common among nervous people who are not insane.” (Adam 1892, p. 1151) The connection drawn between common behaviours and the more extreme damage of self-mutilation meant that some of these doctors determined to seek a broader explanation, leading them to social and environmental models of mental health rather than a biologically determined model of mental illness (Savage 1886, 1891).

This example, in which my own assumptions were critically confronted, also highlights the benefits that reflexivity has for the way we approach, write and understand history. Many historians still cling to notions of objectivity, or try to re-define the term to create a new empirical approach to the topic; this serves mainly to perpetuate the myth that there is only one history, and that the historian is a truth-teller of the past. History is more valuable when it critiques, rather than reinforces, our modern perceptions. Our present is just one of myriad possibilities that have arisen from a certain set of circumstances: it is not inevitable and, in many ways, it may not even be desirable. By exploring how ideological systems have emerged, we can contest them; by considering the relationship between language and identity, we can better understand how identities have been constructed; and by analysing how personal experiences have been shaped at particular moments in time, we can guard against essentializing our modern categories. The lesson here is that we need to be just as critical of the aspects of our research that stem from personal experience as of those that draw on the work of other researchers. While this is an established process in social science research, through the methodological approach of reflexivity, it is not usually the case in history. Identity is not a shield to hide behind, but something to acknowledge and explore. And the way we acknowledge it is just as important, lest we unthinkingly create new “kinds of people” to replace the old, medically defined kinds. The most important step forward is to break away from the assumption that research is or should be objective. Why should we not, as researchers, be upfront about our position within our research? Anecdotes and personal viewpoints, clearly acknowledged, are not simply about readability but about reflection, a need to be critical of ourselves as much as the other objects of our studies, and an invitation to the reader to extend the same critique to our views. Creating new approaches to knowledge production is as much about the style in which we write as it is about the knowledge itself.