Introduction

Salami Slicing is a term I first heard after starting my first postdoctoral research position. Having completed my PhD and a successful Viva only a few months previously, I was anxious, excited and driven to begin developing a set of strong and relevant publications off the back of my PhD, and to start building my publication record. I had soon submitted my second publication proper (the first having been submitted prior to completing the PhD thesis). This second paper comprised a qualitative study of oilmen’s different understandings of their masculinities, and how these relate to offshore safety and risk practices; a manuscript that clocked in at a very lengthy twelve thousand words and was wholly based upon data collected during my PhD. Upon hearing back, quite rapidly, from the journal, I was advised to revisit the work and consider splitting the manuscript into at least two separate studies. This presented an interesting quandary, in the context of having recently heard about Salami Slicing.

At the end of the following month, after many late nights of revising and editing, and weekends spent re-examining the collected ethnographic data and field notes, I finally found myself with two papers; both based on the same qualitative methodology, ethnography and data, yet each aimed at a distinctly different audience, focussing on different parts of the ethnography, and drawing different conclusions and challenges against different disciplines and bodies of literature (one for psychology, one for sociology). One paper retained the original theme, relevant for a psychology journal focussing on safety and risk. The other was for a sociological audience and critiqued a popular identity theory using evidence of men’s identities and behaviours gathered during the ethnography. I duly submitted the two articles to the relevant selected journals and breathed a sigh of relief.

The following weekend, in a conversation over coffee with one of my former PhD peers, our discussion drifted onto publications and, later, onto Salami Slicing. It transpired my friend had planned a set of publications from their also recently completed PhD but had decided to curtail this to likely only two: one focussing on the methods and one focussing on the outcomes. When I asked why, given my knowledge of the breadth of longitudinal qualitative data they had collected and their previous plans for several more publications, they replied that while they recognised the potential for more papers highlighting nuanced and relevant contributions to different interdisciplinary fields and currently emerging debates—papers that could easily form distinctly different scholarly works—concerns over Salami Slicing were holding them back. As our conversation continued, and they described, with more knowledge than I then had, their interpretation of Salami Slicing—the different aspects and considerations, the seemingly endless pitfalls and ‘hard’ and ‘soft’ rules—I suddenly became very concerned for my recently submitted papers. Both used the same body of qualitative data, collected in the same time frame and using the same methods. Further, I had used some of the same participant quotations in both papers—was this OK? They did not know.

That evening I arrived home in a state of panic. Finally, after what seemed like an entire night awake endlessly deliberating—much of it spent reading everything I could about the seemingly endlessly talked about, yet almost never concretely explained, Salami Slicing—I emailed the editors of both journals early the next morning, explaining my concerns and requesting that the manuscripts be withdrawn. The responses came the following day. Both were extremely courteous and polite, yet also seemed confused at my withdrawal request, suggesting it was needless. However, both editors acceded, and I breathed a long sigh of relief for not having (albeit unknowingly) possibly breached any ethical guidelines for publication. I was now left with two withdrawn papers and nothing under review.

Over the following days and weeks, I continued to read in much more detail about Salami Slicing, deviating from the ‘harder’ science literature that had piqued much of my panic and delving deeply into journal-specific social science guidelines and checklists, sociological publications and online debates. To cut a long story short, I should never have withdrawn the two publications. In the following sections, I first explain—with a clear definition—the term Salami Slicing and discuss its relevance specifically to the social sciences. I then present three common case scenarios that appear to frequently lead to confusion, and which were the most frequently raised questions on the many online ECR discussion groups I visited. By elucidating the term from a purely social science perspective and clarifying some of the most salient points surrounding Salami Slicing, this paper aims to provide fellow ECRs with a clear understanding of Salami Slicing, what it looks like when contextualised for qualitative research, and how to avoid it. Most saliently, this paper aims to encourage ECRs working in the social sciences not to needlessly withdraw manuscripts, or hold back relevant, topical and timely work where publication is appropriate, for fear that it may fall under a blanket Salami Slicing label, when in fact some of these concerns may not be immediately relevant, provided that the proper precautions are taken.

Salami Slicing: a definition for social science

The term Salami Slicing is used to describe a number of fraudulent practices ranging from psychological manipulation (see Schelling 2020; Kipgen 2021) to hacking, confidence trickery and theft. However, in academic circles the term has risen to prominence, beginning in the hard sciences, to describe the practice of splitting a single academic study, which could easily be published as a single body of work, into multiple publications or ‘slices’ that, while differing little, can be spread across multiple journals to inflate the author’s publication record and rank. The term Least Publishable Unit (LPU) (sometimes also Minimum Publishable Unit—MPU) is often associated with Salami Slicing (which I will refer to from here on as SS). This is a fitting descriptor, as the goal of the academic unethically engaged in ‘active SS’ is to ascertain the minimum ‘unit’ of knowledge that must be transposed from a study into a publication so that it can be published as a stand-alone paper. Once this process is replicated for an entire study and all LPUs have been mapped and written up into different publications, appearing as separate studies, the result may be many papers showing different sections of results, yet the articles will invariably contain a similar introduction, near-identical methods and near-identical conclusions—as they all belong to the larger overarching study from which they have been ‘broken off’. This is—at its essence—the concrete and ethically unacceptable definition of SS.

The primary academic concerns over, and impacts of, Salami Slicing originate within hard science studies. For example, Urbanowicz and Reinke (2018) present a fascinating, recent and thorough discussion of salami-slicing practices within an ecology and evolutionary biology context. They first suggest that academic authors offer diverse justifications for multiple publications from single datasets. These include the natural structures of large projects, sometimes with multiple endpoints and outputs, journal word-count limitations and projects addressing multiple research questions (see p. 2). However, the authors largely point to publication overlap representing a deliberate practice of publication inflation in a highly competitive and publication-focussed academic world. Urbanowicz and Reinke go on to discuss how the separation of findings from a single dataset into multiple publications damages the quality of published research. They highlight concerns over how multiple publications force readers to sort, identify and evaluate the novelty and validity of each ‘block’ of publication results in order to collate a coherent, collected picture of the original overarching research and accurately evaluate the importance of its findings (see p. 3). In addition to the associated time costs to researchers, they also posit negative scholarly implications for researchers developing meta-analyses: in particular, complications in ascertaining from published results the frequency of novel research findings within ‘split’ studies. This is not an isolated consideration and has been highlighted by other researchers. For instance, Spielmans et al. (2010) explore SS as linked to pooled studies of antidepressant medications. The authors present an engaging discussion of publications exploring the antidepressant medication Duloxetine, including an arrangement of the reviewed studies, study focus, included clinical trials and concluding findings (see pp. 100–103). Fascinatingly, the authors highlight multiple instances of SS, demonstrating evidence of multiple publications from the same datasets. They give examples of data split to report outcomes of medication efficacy and safety by gender and ethnicity, with the resulting publications often reporting no notable differences (see p. 99). They go on to discuss other examples of pooled analyses examining safety and tolerability that frequently suggest no notable concerns, yet branch data into multiple publications with minimal investigatory changes based on a single study protocol.

Like Urbanowicz and Reinke, Spielmans et al. discuss how ‘Salami Sliced’ publications impact the readers of scientific findings. Notably, they raise the point that much duplicate publication data could, and should, be collated into a single publication, as opposed to being spread over (in one case) three separate publications. They also suggest that: “[…] researchers should be aware that salami publication wastes valuable resources of editors, reviewers, and journals” (p. 103). Finally, the authors highlight how the practice contributes to academic bias. They posit:

[…] salami publications may be more representative of propaganda than of actual contributions to science. The fact that such redundant publications have appeared in a wide variety of medical journals raises questions about the quality of peer review and what passes for ‘original’ science (p. 103).

However, Spielmans et al. are also careful to point out that the definition by which they investigate SS is prone to fluctuation, and that a clear definition of SS is not universally understood across all scholarly disciplines. They clarify:

What may be considered redundant information by some may be considered an important scientific contribution by others. Thus, we acknowledge that different evaluators may draw different conclusions regarding whether these publications were appropriate. However, we believe that publishing similar outcomes from the same dataset of publications on several occasions better serves the curricula vitae of researchers and, potentially, goals of drug marketers, than it does science or patient care (p. 103).

The above considerations regarding SS are highly relevant and important, and well highlighted by the authors. As Spielmans et al.’s comment suggests, the methods by which SS, as outlined above, is detected have led to the term being interpreted as meaning different things at different times.

Most existing research discussing SS does so by making reference to a past prevalence of SS within the hard sciences (see also: Baldock et al. 2021; Gray et al. 2021; Gregory and Leeman 2021; Werner 2021). Of the limited publications that refer to social science perspectives (Morse 2021; Siegel and Baveye 2010), most take a similarly structured stance to Spielmans et al. by focussing on the damage SS does to the legitimacy of multiple publications. The result is an overall academic hyperfocus on detecting SS through the repeated discussion of ‘tell-tale’ signs. Invariably, detection focuses on three points: (1) each publication should examine or test a different hypothesis; (2) two publications should not draw from the same body of data; and (3) studies should not report the same results. While these points retain their relevance for some quantitative studies, which may arguably be more easily subject to SS than qualitative research, applying such ‘rules’ in a blanket fashion to the large-scale qualitative research data commonly collected for social science inquiry is problematic. The rigid application of these rules to common qualitative findings forcibly reduces such complex material to simple ‘data’, rendering it devoid of any deeper context or meaning from which differentiation may be drawn out by interpretation and analysis. From a social science perspective, interpretation could render publications from the same dataset—or even from studies with the same hypothesis and the same results—as holding significant discursive diversity across different sociological, anthropological and interdisciplinary circles and sub-areas of interest, where audience, theory or the retrospective re-examination of findings may yield important and progressive discoveries.

Exploring some salient case studies and defining what’s ‘OK’

In conversations with many of my recently made doctoral peers, and in seeking help from many seasoned academics, a unified consensus on SS in the social sciences has remained far from clear. I am led to believe this perspective is shared by many of my fellow ECRs. Notably, having reviewed the many (probably hundreds of) ECR message-board postings I have perused over the last few years, I am also convinced that many ECRs are unclear about SS in the social sciences. In the sections below, I discuss each of the aforementioned ‘detection points’ of concern for SS in the context of common social science publication queries, to highlight what (largely) represents acceptable and unacceptable scholarly practice. These queries have arisen time and again: from myself, my peers, colleagues at various institutions and numerous online ECR message boards and forums.

What’s OK case 1: using the same dataset or publishing from a PhD

One of the most common concerns vis-à-vis SS surrounds the ability to use the same dataset to draw multiple and different conclusions in different research papers. As has been pointed out by others (although with a quantitative focus: see Menon and Muraleedharan 2016), the appropriateness of this can only be decided on a case-by-case basis. The key question the ECR needs to ask themselves is why multiple publications should be required. If the answer is simply journal word count, an attempt to generate two publications to improve publication count, or the convenience of easily replicating experimental methodology with only slight changes, then these represent genuine cases of Salami Slicing, and the practice is unethical and damaging.

However, using the same dataset for multiple publications can be actively ethical, in a manner that promotes legitimate research discoveries. For example, in many longitudinal qualitative studies, including PhD theses, and particularly where ethnography is concerned, researchers may return from the field with a wide-ranging body of experiential, observational and interview data that can be arranged and analysed in different ways, relevant for different scholarly groups. This may relate to different yet likely interlinked subject matters and can make salient and important contributions to more than one disciplinary area. Many social science scholars have developed important and influential scholarly thought in tandem with this approach. Ethnographic methods typically allow a researcher to embed for a lengthy time within a given locale and environment, and to make detailed observations surrounding specific aspects of culture, rituals and norms. Upon return, it is often the case that researchers develop significant discoveries surrounding the use and development of the ethnographic method itself that warrant dissemination via scholarly publication. This is in addition to the publication of discoveries relating to the initial research question, the ethnographic locale, and other, sometimes unplanned, specific discoveries of note. Further, researchers specialising in a specific field, researching a specific location, or using specific methods may employ comparative analysis between their ethnographic experience and the experiences of other researchers, or their own earlier work. While this can lead to multiple discoveries and publications originating from the same dataset, it does not conform to the typical hallmarks of SS employed to label research outside the social sciences. These perspectives are echoed by others writing on the Salami Slicing topic. For example, Jackson et al. (2014) state that “[…] Some of the most exciting and cutting edge research is driven by doctoral students enrolled in doctoral programmes that promote or even require a series of publications” (p. 1).

A similar rationale is exemplified in practice by Kirkman and Chen (2011), albeit from a quantitative perspective. Kirkman and Chen present an insightful and carefully constructed discussion of the pros and cons of publishing multiple papers from a single dataset. Notably, the authors focus on their own experiences of collecting a wide-ranging body of complex data from multiple organisations to study the dynamics of team and leadership interactions. Importantly, they deconstruct the nature of such large-scale data-collection methods and note that the practice results in the collection of multiple variables (see p. 435). This can lead to multiple publications addressing different research questions and subject matter, and making distinct scientific contributions, even though they originate from the same dataset. Perhaps most relevant for the topic of this paper, the authors weigh the merits of developing a checklist protocol to assess the novelty, relevance and contribution of each distinct publication (see p. 436). This includes a ‘self-assessment’ of the types of theories used to work with the data, the variables under investigation and analysis, theoretical implications, and the overarching implications and contributions of the research for practice. The authors conclude that the publication of multiple papers from the same dataset can, in some cases, make unique and beneficial contributions, where a systematic strategy is utilised to ascertain the distinctness, relevance and legitimacy of each contribution.

Considering the above, it is conceivable that, under the correct conditions, distinct publications—contributing to different areas of academic study, addressing different audiences, and furthering different studies in different disciplinary fields—can at times be drawn from a single dataset collected using the same methods and timeline.

An important aid to the above thinking is that, during the submission of articles which contain similar sections, there should never be any direct duplication of the write-up. An ethnographic paper detailing, for example, a new methodology will likely require a lengthy methodological section to formulate a coherent argument for specific methods facilitating better access to data and increased data insights. Conversely, a paper discussing a topic such as unique observational data detailing specific sociological participant interactions may require only a brief explanation of methods, in order to retain focus on the topic of scholarly contribution and to develop findings and discussion salient for that journal. If researchers find themselves rewriting sections with similar wording, content and length, then they should return to the question of legitimacy posited earlier, and again ask why these papers cannot be condensed into a single publication. The key, as I understand it, is to be able to unequivocally justify each publication as wholly deserving of a stand-alone write-up for the new knowledge and contribution it affords its audience and discipline. If the author is in doubt or concerned over growing similarities between two developing papers, or indeed over the validity and originality of a contribution itself, then they should attempt to merge the papers together and develop a notation list of the separate arguments. This will prove invaluable should it later be decided that the manuscripts must be re-separated, in order to maintain the unique arguments of each publication and avoid any overlap that could be considered the unnecessary practice of SS.

What’s OK case 2: writing for different audiences

The topic of different audiences has been covered to some degree above. However, this is a recurrent point of confusion (again, seemingly for many ECRs, as it certainly was for me). Notably, some existing publications discussing SS appear to suggest that similar publications that use the same dataset and methods and report similar findings can be published across two distinctly different research areas or disciplines (e.g. see Menon and Muraleedharan 2016). However, and most notably, this appears to be a very rare occurrence, and one for which I was able to find only very limited evidence. Additionally, the practice of direct—albeit rewritten—duplication of research findings appears to come with several caveats. The most significant of these is a clear differentiation in contribution between journals. For example, Menon and Muraleedharan (2016) state:

Rarely, manuscripts derived from identical or overlapping patient samples can be published in multiple journals catering to different but related professional disciplines. For instance, a manuscript on suicidal behavior can be considered for publication in journals related to sociology as well as epidemiology provided they describe different points of view. The authors must, then, explain, why they think it is necessary to present the findings in a different context (p. 1).

Relating to the above quotation, and to the discussion in the previous section, the most salient differentiator between active SS and a legitimate reason for duplicating publication rests upon the author’s ability to justify why such findings are relevant for sharing with different audiences. Exploring further studies suggests that this differentiation can be broken down into three common points of rationale. First, authors must be able to discuss—and demonstrate with some degree of certainty—reasons why the initial publication may not reach the desired secondary audience. Second, authors should be able to pinpoint the potential impact of reporting the findings to a second audience, contrast this with any probable or expected impact from the first publication, and discuss why such differences exist. Third, authors should be able to present a coherent and balanced case for why each separate academic field relates to a different audience, and demonstrate significant stratification and a lack of overlap between the disciplines. A final note on this topic for ECR social science researchers is the importance of transparency. Notably, much social science research is interconnected. In the previous section, I used the example of a novel ethnographic methodology paper and a sociological findings-focussed paper developed from a single dataset. While these papers focus distinctly on methods and sociology respectively, it is likely that—dependent on subject matter—such papers could also quite easily reach a psychology, gender studies or anthropological audience. For this reason, direct duplication of findings—where no clear alternative argument, focus or contribution beyond the first publication’s argument is made—does not constitute a legitimate argument for a second, identical report of the sociologically focussed write-up in a journal with a different disciplinary focus. This is because these disciplines are not so stratified that they cannot be interlinked; nor are they so polarised that it is inconceivable that scholars beyond the original discipline would be able to access the original publication and make use of its content. When considering the possibility of duplicate publication—from a legitimate perspective, aimed at heavily stratified areas of study—transparency during submission is key. Scholars should contact the editors of both planned submission journals and state their case using the rationale outlined above. Prospective abstracts from both papers should be submitted to aid the process of differentiating the disciplines and audiences, and to build a case outlining why the additional publication is necessary. Upon submission, journal guidelines should be followed closely; while these differ case to case, a typical request is for a copy of each paper to be shared with each journal, so that reviewers and editors can decide on the benefits, costs and legitimacy of the dual-publication request.

What’s OK case 3: formulating a different overall argument and stand-alone contribution

Formulating an overall argument using data collected for one purpose, then revisiting this to examine another ‘secondary finding’, can be frowned upon in some circumstances, and at times represents a difficult process that does not always lead to accurate representations of participants, situations or effects (see Antonio et al. 2020; Ruggiano and Perry 2019). For some qualitative studies, inappropriate repurposing of data is obvious, in that the findings rarely focus directly or adequately on the topic in question, often deviating into topics more directly relatable to first observations from the data than to the secondary findings supposedly under (re)examination. This can be misleading, and second-pass uses of data are sometimes clearly evident when reading multiple publications by the same authors. For example: if—while conducting a study asking participants about their experiences of early morning bus travel—many participants mention and discuss the practicalities of making breakfast, then some may conceive that there exists sufficient data for a stand-alone publication on breakfasting practices. However, scholars must remain mindful of context: the data referring to breakfasting practices, however detailed and nuanced, relate specifically to the context of breakfasting prior to early-morning bus travel, and participants were actively primed and engaged in research focussing first on this primary topic. Therefore, to extrapolate complex narratives about breakfasting practices from this sample, and at worst to incorrectly generalise these as representative of a general population, is unethical, inaccurate and inauthentic. The case is further compounded should secondary studies neglect to explicitly present the original study context, debate its likely effects on the secondary findings and discuss the linkages between the first and second emergent themes.

When considering the possibility of revisiting data to discuss salient secondary themes, maintaining objective context is critical. Importantly, data should likely only be used to reinvestigate themes pertinent during the initial data analysis that emerged as related to the original research question, thus allowing for a basis of linked research interests and a prerequisite of replicability and rationale for further investigation. There is almost no available evidence supporting the investigation of unrelated themes not significantly highlighted during the first analysis, as this suggests a researcher ‘looking’ for themes, as opposed to themes naturally emerging throughout the course of the initial investigation and being flagged for later enquiry. (It should be noted, however, that there are growing sociological conversations surrounding the re-use of legacy qualitative research data for new and later comparative study; for an overview, see Åkerström et al. 2004; Wästerfors et al. 2014.) In considering how to proceed, the researcher should objectively analyse and assess the quality of the data collected and how fit for investigative purposes these data are. For example, if an emerging theme on breakfasting practices prior to early morning bus travel begins to transpire following (again, for example) thematic analysis, but is not fully developed in terms of quantity of data, detailed and nuanced discussion of the topic, or the presence of clear themes from which to compare and contrast data and draw conclusions, then this indicates the researcher should consider re-approaching the sample to collect additional data on this theme, or (much more likely) recruiting a further sample to develop a new stand-alone research study for which a corresponding publication can be developed and the limits of the initial data collection addressed in a new recruitment strategy. It is not appropriate for a researcher to develop a stand-alone paper from a small, tightly ranged data sample linked to an earlier research investigation without disclosing—in full—the original purpose of the first study and how its methodological design influenced the findings collected, as well as explicitly discussing the limitations of the population sample and how this relates to any later generalisations. The researcher must also be able to clearly justify why any new or linked thematic discoveries deserve publication as a separate write-up, and how they shed new light on the first findings, in ways that do not immediately ring of the Least Publishable Unit approach typical of deliberate and unethical Salami Slicing.

However, and equally, data—especially large-scale qualitative data—can often involve detailed and lengthy threads of discussion linking back to observations, interviews and discussions that span multiple related topics. If a significant body of data can be extensively charted during initial analysis that actively contributes to a salient and new, yet related, topic of discussion—one that elaborates on the original investigation or provides a different perspective on the original findings and raises new discoveries—then this can be evaluated for further investigation. Crucial here is the aforementioned process of examining data for reliability: meaning an objective assessment by the researcher of the richness of the data that also considers the context in which the data were first collected. For example, the researcher may ask themselves: Do I really have sufficient data to adequately investigate this emerging topic of discussion? Deciding this may involve first considering the ‘representativeness’ of the sample as a priority, and then using analytical tools to measure how frequently themes in the data are replicated, how emergent secondary themes relate to similar first themes, how specific themes on the same topic are constructed, and how themes relate back to the original research question under which the data were collected. ECRs may find Braun and Clarke’s six-phase method of thematic analysis particularly useful for this (see Braun and Clarke 2006). If the quality and quantity of data are sufficient to develop a further analysis investigating this new theme, then this may be carefully approached, bearing in mind the context within which the data were first collected and how this may affect the topic now under investigation.

Importantly, during the write-up and subsequent submission for publication, authors should maintain complete transparency about the origins of the data: for what original purpose, and how, the data were collected, how the data were analysed, what led the researcher to develop this new topic thread from the same dataset, and links back to any earlier publications from that dataset. This will not only allow readers of the work to remain objective about the outcomes and how these were affected, but will also steer the author away from drifting into the practice of Salami Slicing, whether by drawing needlessly from a single dataset using LPU principles, or by artificially and retrospectively constructing a new hypothesis to apply to historical data. This stands in contrast to legitimately reconsidering the original hypothesis and contrasting it against a further salient observed theme of legitimate and related importance that lends new perspectives on, or additional to, the original findings.

Discussion and conclusion

This paper has been developed to highlight some of the key concerns surrounding the oft-discussed, yet oft-unclear, topic of Salami Slicing. Most saliently, many of the existing publications that describe Salami Slicing do so from the perspective of the term’s application to the hard sciences—specifically, the replication and fragmentation of studies that rely on quantitative data to demonstrate discoveries and perspectives through statistical replicability. Conversely, social science methods—especially within sociology and anthropology—often rely on large pools of carefully collected qualitative data that can span different observational, discursive and interview methods. Unlike the sometimes bespoke (and narrow) design of solely quantitative metrics, qualitative data can offer many distinct threads of new and relevant scholarly enquiry that are directly related to, yet sometimes go beyond, the original research question under study. This is largely because the means of collecting these data are less structured than quantitative methods, allowing for more range and diversity in the data collected. In fields such as anthropology and sociology, data collection often occurs in remote and underexplored environments with unique demographics, providing a valuable opportunity for additional later publications and contributions that are important to share across different disciplinary fields and subfields where appropriate knowledge gaps can be identified. Some early-career researchers may misunderstand Salami Slicing, sometimes to the point of rigidly perceiving that publishing papers from the same dataset, using the same methods, or on related topics automatically represents poor ethical conduct, or even misconduct. However, as this paper outlines, this is not always the case.

This article sheds some light on the ‘dos’ and ‘don’ts’ vis-à-vis Salami Slicing as it can be interpreted in relation to the rich qualitative data often associated with social science research. By presenting some clarifications and discussion, I hope to encourage early-career researchers to be mindful of not taking a ‘one-size-fits-all’ approach when considering the seemingly ‘hard’ limits of Salami Slicing as frequently discussed and (at times) misrepresented for quantitative ‘hard science’ research. The term is sometimes explained and contextualised without recognition of qualitative research and its complexities. I would welcome further discussion and debate on this topic, to open new conversations surrounding how best to continue clarifying the term and its contemporary relevance within qualitative social science study, and to further avoid misrepresentation and misunderstanding.