Following our analysis, we found that the guidelines analysed explicitly reflect the principles of respect for persons, through informed consent and the protection of vulnerable groups, and beneficence, through a balanced assessment of risks and harms, as understood in the Belmont Report. On the other hand, the concept and applications of justice were rather implicit and showed a greater shift away from the original Report. These differences are summarized in Fig. 1 and will be discussed in more detail in the following sub-sections.
General definitions and scope of internet research
Out of the five guidelines analysed, only the BPS document provides a definitive description of the activities and tools that can be categorised as internet research. BPS describes how internet-mediated research can involve data acquisition in the absence of face-to-face co-presence. This data can be quantitative (surveys and experiments) or qualitative (narratives), as well as reactive (participants interact with materials such as online surveys and interviews) or non-reactive (unobtrusively obtained data such as compilations of digital traces, hits, and analyses of found text). While lacking an explicit definition of the activities involved in internet research, the AoIR, BERA, and NESH documents mention the role of the internet both as a subject of research and as a tool that spans data collection, analysis, and storage.
Respect for persons
The Belmont Report defines respect as the acknowledgment of participants’ autonomous participation and the need to protect those with diminished autonomy through the conduct of informed consent. Our analysis of the internet research guidelines found this to be the most explicitly reflected principle, particularly in the applications of and modifications to securing informed consent and protecting vulnerable groups, as summarized in Fig. 2.
We will now turn to a more detailed discussion of the conceptualisations and practical applications of this first principle in the guidelines analysed.
Informed consent
The voluntariness of participation in ethically sensitive research is achieved by making the respondents fully aware of relevant information regarding the research, its objectives, the methods that will be adopted, and its potential risks and benefits. Seeking informed consent has been the most common way of ensuring that the ethical principle of voluntariness is practiced.
In traditional research concerning human subjects, consent is usually sought by having a respondent sign a document that outlines all relevant information, including the risks and benefits involved. However, the current guidelines bring to light new dilemmas and issues in seeking consent in online settings. Indeed, the written informed consent typical of traditional face-to-face research may require modifications when implemented online for a number of practical reasons. BSA provides a comprehensive summary of scenarios that are exempt from informed consent and confidentiality, quoting different sources; we suggest consulting the original document for a more detailed list. Meanwhile, Table 1 provides a synthesis of some scenarios wherein informed consent may be challenging to obtain or may actually be waived, as identified in the five documents analysed: AoIR, BERA, BPS, BSA, and NESH.
Table 1 When and how to waive or delay informed consent based on disciplinary associations’ guidelines
Firstly, the volume of content and the number of individual members in large online forums make approaching each individual participant both impractical and time-consuming. As BERA suggests, the online and mediated nature of the process makes it more difficult for researchers to explain, and for participants to understand, what they are consenting to. Informed consent may also be difficult, and sometimes even impossible, to gather from forum respondents whose contact details cannot be accessed (Jones 2011). Hoser and Nitschke (2010) regard this difficulty as a barrier to implementing this privacy measure, thus making the issue of informed consent more challenging in online settings.
Secondly, the wider variety of digital tools and researchers’ increased access to them have made unobtrusive data collection methods more accepted in academic research, beyond their more common use in digital marketing. Unobtrusive methods such as data mining and undisclosed observation of online groups gather found and already existing data (Lee 2000) without disclosing the researcher’s identity and without his or her involvement in communications (Eysenbach and Till 2001). In the same manner, the emerging field of learning analytics is subject to the same ethical dilemmas (Jones 2011).
The wider possibilities for data collection and the changing membership configurations of online communities make seeking informed consent less straightforward. Indeed, the proliferation of both publicly accessible and membership-only online forums raises the question of whether consent should be sought from the individual, the community, or both. In light of these changes, BPS highlights the role of community moderators and list owners when deciding whether to seek the consent of a group, recommending that researchers reach out to them whenever possible to inquire about how to approach the informed consent process, especially when it remains unclear whether the data involved is public or not. In relation to the textual nature of internet data, BPS, BSA, and AoIR also advise researchers to be aware of legal and copyright issues and to consider gaining permission from the page author or web hosting company when linking to publicly available personal websites and social network content, as well as when using screenshots or images taken from the web. BSA points out the usefulness of being acquainted with the specifications of licenses such as Creative Commons when dealing with data online, although approaching the author or owner of the text works best to avoid harm. Strictly speaking, copyrighted content is not considered public domain.
While BPS and NESH contend that informed consent should be striven for whenever possible, they also acknowledge that it may be waived in unique circumstances. BPS states that this can happen when there is reason to believe that there is no expectation of privacy among the group being studied or when the study’s scientific and social value justifies undisclosed observation. NESH also asserts that researchers must be sensitive to the “integrity of context”, especially since not all online users are aware that their private posts can be publicly accessible. One practical application of this arises when accessing private groups. In this case, NESH declares that there is a higher expectation of privacy, which consequently raises the need to seek valid consent and to ensure the anonymity of those involved.
Thirdly, informed consent in online settings may not always come in paper form. AoIR suggests acceptable modifications such as digital signatures, virtual consent tokens, and click boxes. BPS also suggests that the completion of an online questionnaire can serve as an indication of consent, although the use of check boxes remains a good measure. It likewise recommends that consent sought through non-traditional means be simple enough to encourage respondents to actually read its contents. A succinct presentation of the consent form in online settings is especially important to ensure that participants are thoroughly informed of the risks and benefits of the study and are not just randomly ticking boxes.
Lastly, researchers also have to be aware that opting to waive informed consent at the beginning of a study may actually be the most ethical approach in some cases, especially “when you want to present a specific case study or quote an individual or focus on a particular element” (Buchanan et al. 2010) or when withholding information (and thus informed consent) maintains the validity of the study (BPS). In such cases, BERA suggests that an ethical approach might be “to obtain informed consent when the project is at the point of reporting and the research subject can decide what is acceptable in relation to the way the research is to be reported” (n.p.), akin to what Roberts (2015) calls retrospective consent. According to BPS, the researcher should also put mechanisms in place to allow participants to be debriefed in the case of withdrawal from the study.
As suggested by the latest disciplinary associations’ guidelines, ethically sensitive research conducted on the internet is an ongoing process. This is particularly true when seeking informed consent from participants, whose perceptions and expectations of privacy can change across groups and over time. The disciplinary associations’ guidelines discussed in this article argue for genuinely informed and valid consent that is not exclusively anchored in the traditional practice of signing a form. As Schneier (2000) and Orton-Johnson (2010) suggest, treating consent as a process rather than a product entails that we treat it not merely as a signed piece of paper. This can be achieved by the researcher making himself or herself available to the participant if any doubt regarding the research process arises (Whitehead 2007). As the tools for research expand, so should the ways in which consent can be sought and given.
Voluntary withdrawal in online studies
One of the aspects of the voluntariness of research participation in the Belmont Report is the freedom to withdraw from the study. BPS contends that, compared to face-to-face studies, the risks are higher for internet-mediated research as withdrawal may happen without the researcher’s knowledge. This can make debriefing impossible and may result in some data being collected and stored even after a participant has withdrawn.
As a measure of ethical practice, more specifically when carrying out online surveys, BPS suggests that participants should be made aware of the possibility of withdrawing their participation during and after the study and of having partially collected data deleted. It is also good practice to make withdrawal procedures clear, for example by providing a visible withdraw button that leads to a debrief page and offers the option to withdraw partially collected data. In cases where the participant requests withdrawal after the study has been completed, also known as retrospective withdrawal, the researcher should be able to honor this, taking into account the governing data protection laws. An identification code may be assigned to individual participants at the start of the study to facilitate possible withdrawal in the future, especially when dealing with large data sets.
In online focus groups, withdrawal of data collected through unobtrusive means may prove challenging. BPS notes that data collected in this way may contain the withdrawing participant’s identifiable information within the responses provided by other, non-withdrawing participants, something researchers should take into account when storing and documenting data.
The contested notion of vulnerability
Protecting vulnerable groups is an application of both the principles of autonomy and beneficence, acting both as a gesture of respect for human dignity (Jones 2011) and as a measure to minimize harm (BPS). AoIR, BERA, and BPS identify, albeit only in broad terms, which groups are deemed vulnerable in online settings. A point of convergence, however, is the documents’ treatment of children and minors as vulnerable and the need to seek parental or guardian consent. Both AoIR and BERA suggest that the principle of proportionality be applied, in that researching more vulnerable groups requires more care on the part of the researcher to protect them from harm. At the same time, AoIR contends that predicting which groups are vulnerable, and the harm that may be induced as a result of the study, is a challenging task. As a result, it proposes a reflexive and negotiated approach to this principle that clarifies the various notions of vulnerability and harm in context.
Beneficence
The Belmont Report states that an ethical approach to research involves the principle of beneficence, wherein the researcher seeks to minimise harm and maximise benefits for human subjects by conducting a risk–benefit assessment. This principle remains salient in the guidelines analysed, with AoIR regarding ethical decision-making as a balancing act wherein decisions such as waiving informed consent and using deception are made through a careful and proportional consideration of their benefits and risks. The notion of risk reflected in the Belmont Report primarily alluded to psychological or physical pain or injury, although it likewise acknowledged legal, social, and economic harm that might result from one’s participation in research.
While several scholars contend that online settings usually involve minimal risk (Kraut et al. 2004), others argue that online and offline settings require equal amounts of sensitivity (Eynon et al. 2008). However, the guidelines analysed revealed some nuances in the ethical conduct of internet research in relation to the observance of the principle of beneficence, summarized in Fig. 3. As Buchanan (2011) suggests, the very existence of internet-specific guidelines conveys that there are new challenges arising from the emergence of this new research space and medium.
In the subsequent sections, we will discuss specific beneficence-related considerations outlined in the document guidelines analysed.
Privacy breach as harm: contextuality and culture-specificity of expectations of privacy
Following the analysis of the five disciplinary associations’ guidelines, we found that the protection of privacy is one of the most salient themes under this principle, thus reinforcing the assertion regarding the distinct nature of online research. Although privacy is not explicitly reflected in the original Belmont Report, it has since become a pressing concern and a source of potential harm that researchers should seek to mitigate. BPS suggests that the risks are greater for privacy breaches that are beyond the researcher’s control. This is corroborated by Kraut et al. (2004), who describe how online studies generally involve minimal harm but can lead to negative results when data is misused. As Boyd (2010) points out, the greater risk to privacy in online settings is due to the fact that internet-mediated data and tools are persistent, searchable, scalable, and replicable.
As AoIR suggests, harm may manifest not only in physical but also in social, psychological, and economic terms. One of the prevalent ways that psychological and economic harm is induced in online social science studies is when a privacy breach damages a participant’s reputation (Gaiser 2008; Rasmussen 2008) and leads to the loss of a job. As such, all the guidelines analysed highlight the importance of researchers’ awareness of the specificities of the context in which the research is being undertaken and of participants’ corresponding expectations of privacy, especially given the global reach and cultural diversity found in internet settings (BPS). AoIR and BSA refer to this as a process-based and dialogic approach to ethics, where the public or private nature of data and interaction spaces is subject to ongoing negotiation.
To address and mitigate privacy-related forms of harm, the analysed documents suggest three approaches. First, researchers are advised to keep participants’ data and responses anonymous and to ensure that their identities cannot be traced, by using pseudonyms (NESH) and vignettes (BERA), releasing only general or conglomerated data (BERA), and using paraphrased statements (BPS). Secondly, BPS advises researchers to be conscientious about how they store and transfer data, with email correspondence and non-encrypted data storage being more prone to privacy breaches. Lastly, while researchers should strive to exhaust all measures to protect participants’ identities, they must also be wary of guaranteeing complete confidentiality when in fact it cannot be assured (BPS), given the text-based and archiving capacities of internet technologies.
However, AoIR presents a natural caveat to this: the risk cannot always be assumed in the varied contexts and changing meanings that the internet allows. This is exemplified in Bassett and O’Riordan’s (2002) study of an LGBT website, wherein an otherwise marginalised population sought visibility by publicising sensitive data. Discarding the data altogether because of its sensitive nature can lead to further marginalisation. Indeed, “a rhetoric of ‘protection’ may result in furthering the unequal power relations of media production by blocking full representation of alternative media” (Bassett and O’Riordan 2002, p. 244). Researchers thus need to be aware that some sources providing sensitive data should not be elided just because the original contributors’ contact details, and therefore consent, are not readily available.
Risks and benefits for whom? Emerging issues in internet research
Another notable departure from the interpretation of beneficence in the original Belmont Report is the changing conception of the subjects and recipients of risk when conducting research. In addition to the concern for participants’ welfare and the benefits gained by the wider society, the guidelines analysed extend their interpretations of beneficence to online communities and to the researchers themselves. The shift of language from society to communities reflects the underlying premise of trust that characterizes many virtual social spaces. NESH also points out that social and cultural movements that function on norms of openness and freedom abound in the online setting, making it imperative to consider adopting a sharing mindset when communicating the outcomes and benefits of research to participants. The safety and interests of the researcher are likewise considered in the current interpretation of ethical internet research. According to the BSA, researchers should be wary of assuming a vulnerable position and being at the receiving end of abuse given their online presence.
Justice
In relation to the protection of vulnerable groups under the first principle of respect for persons, the Belmont Report interprets justice as a fair distribution of the risks and benefits of research and exemplifies its application in the selection of subjects. As previously mentioned, our analysis revealed that the applications of the justice principle are the least explicit of all three principles in the guidelines. The absence of the nomenclature used in the Belmont Report may be due to the fact that the documents seek to provide guidance at a practical level. Secondly, research often involves overlapping applications of these principles, wherein elements of the justice principle are subsumed in others, as in the case of protecting vulnerable groups as a function of respect for persons. Thirdly, the implicitness of this principle may also be influenced by how differently justice manifests in the textual (Bassett and O’Riordan 2002), and oftentimes social and participatory, nature of online research compared with the Belmont Report’s biomedical roots. In online research settings, the focus is less likely to be on treatments and interventions and is instead geared towards information retrieval and group participation. Lastly, it may well be that the very ethical configurations that frame research in general are changing, making the issue of fairness reflected in the Belmont Report insufficient for defining an ethical research practice. For these reasons, we drew on existing literature to illuminate the implicit interpretations of justice and identified some gaps in its formulation, summarized in Fig. 4 below and discussed in more detail in the following sub-sections.
Protectionist versus inclusive ethics in internet research
We found in the Belmont Report an overarching protectionist stance towards vulnerable groups, which runs parallel to Shore’s (2006) contention. This is understandable, considering that the conceptualisation of justice in the original Belmont Report was largely influenced by the exploitation of certain groups in the nineteenth and twentieth centuries for biomedical research. The prevailing social conditions at that time prompted an approach to ethics that was primarily concerned with avoiding the danger of recruiting certain profiles of individuals (e.g. ward patients, prisoners, rural black men) for experiments and clinical trials without them receiving the benefits of the research findings (National Commission 1979; Shore 2006). As a result, the Belmont Report pushed for a hierarchical selection procedure when recruiting subjects, opting to exclude disadvantaged groups to avoid further burdening them.
However, our analysis of the current internet research guidelines revealed a combination of protective and inclusive approaches to the principle of justice. This is corroborated by Shore (2006), who points out that subsequent extensions of the Belmont Report beyond its original conception involve an awareness of the risk of excluding certain groups. Alongside making sure that groups with highly sensitive data are protected, especially where the interests of children and minors are concerned, AoIR asks whether these groups are being excluded from research due to the difficulties in securing ethical approval. Even BPS acknowledges that some political activist groups may be open to non-anonymous publication of their information and responses in line with their mission as a community. In this regard, we see how the discourse is shifting from an exclusionary to an inclusionary language in the application of justice. This is consistent with Bassett and O’Riordan’s (2002) study mentioned in the previous section, which provides a clear example of how an outright avoidance of what researchers would perceive as sensitive data can cause some voices to be silenced. They argue that academic researchers need to take into account the internet’s various uses for a wide range of groups as well as the cultural production and visibility that it can provide for traditionally marginalised populations. Walther (2002) and Kozinets (2015) also point out that the textual and spatial qualities of internet use allow for different methodological approaches, such as a focus on the linguistic features of the text in the former and more observational or anthropological studies in the latter. As such, what counts as ethical decision-making would differ in these scenarios. Ultimately, researchers should be careful about swaying too far towards the protectionist side when carrying out online research.
Shifting ethical frameworks
The inclusion–exclusion tension that surfaced in our analysis reveals a shift in the general understanding of what is deemed ethical in research. While the Belmont Report subscribes to fairness of distribution as an application of justice, the guidelines analysed are geared towards culturally sensitive and dialogic approaches that are open to negotiation. All the guidelines have retained the need to protect the welfare of those involved in the research, although an overarching theme is an appreciation of the multiplicity of meanings and expectations of individuals and communities online, as well as the diversity of geographies, cultures, and contexts in which they operate. Both AoIR and BSA promote a case-based or situational approach to ethics against a backdrop of changing technologies and possible uses of data found online. The reflexivity that characterizes postmodern thinking has likely contributed to the changing definitions of harm and vulnerability that affect researchers’ ethical decision-making. Thus, we cannot readily assume that groups traditionally considered vulnerable would want to be excluded from research.
Justice: gaps and future directions
We have found this principle to be the least developed in the documents analysed; while this does not necessarily lead to unethical research, it warrants attention and greater explicitness in future versions of these guidelines. The distributive justice that characterizes the Belmont Report (Longres and Scanlon 2001) seems insufficient to cover the relational justice requirements of the community research that abounds on the internet (Shore 2006). In practice, researchers are tasked with striking a balance between protection and effective participation. Adopting this approach opens up for researchers the possibility of what Hine (2005) identifies as political action through research, whose function is “to sidestep, transform, highlight, or reinvent some traditional political transformations, identities and inequalities” (p. 242). A widely reaching platform such as the internet can provide this leverage if used wisely.
Given the strong potential of the internet for democratization (Mann and Stewart 2011), it would be helpful if the current guidelines alerted researchers to the various sampling and recruitment issues inherent in online research, an element that appears to be lacking in their current form. Several authors have alluded to the internet’s non-representativeness of the general population (Eysenbach and Wyatt 2002; Im and Chee 2011). Therefore, the challenge that researchers are confronted with involves deciding whether the internet is an appropriate methodological tool and, depending on the research design, whether or not extrapolating research findings to the general population is appropriate (Walther 2002).