Abstract
The replication crisis has stimulated researchers around the world to adopt open science research practices intended to reduce publication bias and improve research quality. Open science practices include study pre-registration, open data, open access, and avoiding methods that can lead to publication bias and low replication rates. Although gambling studies uses research methods similar to those of behavioral research fields that have struggled with replication, we know little about the uptake of open science research practices in gambling-focused research. We conducted a scoping review of 500 recent (1/1/2016–12/1/2019) studies focused on gambling and problem gambling to examine the use of open science and transparent research practices. Although 54.6% (95% CI: [50.2, 58.9]) of studies used at least one of nine open science practices, the prevalence of each individual practice was low: 1.6% for pre-registration (95% CI: [0.8, 3.1]), 3.2% for open data (95% CI: [2.0, 5.1]), 0% for open notebook, 35.2% for open access (95% CI: [31.1, 39.5]), 7.8% for open materials (95% CI: [5.8, 10.5]), 1.4% for open code (95% CI: [0.7, 2.9]), and 15.0% for preprint posting (95% CI: [12.1, 18.4]). In all, 6.4% (95% CI: [4.6, 8.9]) of the studies included a power analysis and 2.4% (95% CI: [1.4, 4.2]) were replication studies. Exploratory analyses showed that studies that used any open science practice, and open access in particular, had higher citation counts. We suggest several practical ways to enhance the uptake of open science principles and practices both within gambling studies and in science more generally.
Introduction
Many behavioral science research fields have surprisingly low likelihoods of replicating key findings (Camerer et al., 2016, 2018; Nosek & Errington, 2020; Nosek & Lakens, 2014; Open Science Collaboration, 2012, 2015) and struggle with publication bias (i.e., a preference for publishing statistically significant results; Anderson et al., 2017; Ferguson & Heene, 2012). Research stakeholders have proposed that using open science principles and practices, along with study replication, is one way of combating these threats to research quality and accuracy (e.g., Blaszczynski & Gainsbury, 2019; LaPlante, 2019; LaPlante, Louderback, & Abarbanel, 2021; Louderback, Wohl, & LaPlante, 2021; Munafò, 2016). Yet, recent scoping reviews of behavioral research generally and the substance-related disorders literature in particular have indicated that the use of practices such as pre-registration, open materials, open data availability, and more, has been limited (Adewumi, Vo, Tritz, Beaman, & Vassar, 2021; Hardwicke et al., 2021). Likewise, direct replications of published research are rare. For example, a substance-related disorders (e.g., opioid use disorder) scoping review study (i.e., Adewumi et al., 2021) observed only one replication study in a sample of 300 studies published during the 2014–2018 period.
Although gambling studiesFootnote 1 shares methods, study instruments, and theoretical perspectives with other types of behavioral research, including research on substance use disorders, information about the use of contemporary open science research practices in gambling studies is very limited (see LaPlante et al., 2021). Accordingly, in this article, we report the outcomes from a scoping review of a random sample of 500 studies about gambling-focused topics and systematically map the existing gambling research and identify gaps in open science research practices. In completing this review, we have provided new information about the nature of gambling-related research practices and potential practice deficits that risk a low likelihood of replication and a high likelihood of publication bias. This information is central to estimating the quality and methodological rigor of the published literature related to gambling and for providing clear recommendations for developing a high-quality, replicable research literature that can effectively inform evidence-based policy.
Background
Open Science Principles and Practices
Open science practices have grown in popularity in scientific research during the past decade (Banks et al., 2019; Nosek & Lindsay, 2018). This rise in interest and use of open science practices came in response to two major events in science: (1) the identification of a “replication crisis” in multiple disciplines, most notably in psychology, characterized by low success rates when replicating previously significant results on different samples (Klein et al., 2018; Lindsay, 2015; Loken & Gelman, 2017; Maxwell et al., 2015; Schooler, 2014), and (2) widely-publicized examples of fabrication and falsification of data in high-profile published scientific research (Callaway, 2011; Eggertson, 2010; Godlee, Smith, & Marcovitch, 2011; Schulz et al., 2016; Wicherts, 2011). Following these two events, scholars took steps to enhance the transparency, rigor and validity of scientific research by developing and promoting open science practices, including research pre-registration and Registered Reports, separation of confirmatory and exploratory analyses, open data, open materials and open access (e.g., Nosek & Lakens, 2014; Nosek et al., 2018; West, 2020). Each of these practices is encompassed under the larger umbrella of “open science” practices.
First, research pre-registration refers to a detailed public and time-stamped document containing research questions, hypotheses, study methods, and plans for analysis that is written before any data collection or analysis takes place. These documents are typically registered online through an independent organization, such as the Open Science Framework or a clinical trials registry (e.g., clinicaltrials.gov). Pre-registrations are usually made public, yet can be embargoed for a period of time (e.g., 6 months) to prevent the possibility that another researcher might “scoop” an idea before the original team is able to publish the results or to ensure that journal requirements for blinded submission are met.
Second, open science explicitly differentiates between confirmatory and exploratory analyses. Confirmatory analyses are pre-registered and follow an analysis plan created before the data were collected, with the aim of testing specific hypotheses about the sample(s). When developing confirmatory hypotheses, best practices for research suggest that an a priori power analysis should be carried out to determine the sample size necessary to detect statistically significant results (see O’Keefe, 2007). Exploratory analyses should also be sufficiently powered; however, they are not pre-registered and consist of testing relationships among variables in the data that were not pre-planned. Exploratory analysis can provide the basis for future, pre-registered research that tests confirmatory hypotheses.
Third, open data pertains to making data available to the public in a freely accessible location with limited steps to access it. Providing data to the public increases transparency within research, allows independent observers to engage in reproducibility checks of published findings, and might encourage researchers to conduct novel independent or collaborative secondary analyses. In addition, open data can stimulate meta-scientific research that analyzes multiple datasets in systematic reviews and meta-analyses (Moreau & Gamble, 2020).
Fourth, open materials refer to sharing study components that are necessary for another researcher to conduct a replication of the study, such as survey questionnaires or experimental protocols. For materials to be open, they must be freely and easily accessible to the public. Offering open materials is intended to increase transparency within the research community, and to facilitate replication and extension studies.
Fifth, open access to the products of research includes several components, such as making a preprint of a paper available online (i.e., posting a preprint) for anyone to access or making the published version (i.e., an open access article) of an article freely available online to the public. Both preprints and open access articles are designed to disseminate scientific knowledge to anyone in the world with an internet connection, without having to pay money.
Reviews of Open Science Practice Use in Scientific Research Articles
Researchers have begun to examine the use of open science practices in peer-reviewed scientific research articles (Adewumi et al., 2021; Hardwicke et al., 2020, 2021; Iqbal et al., 2016; Norris et al., 2021; Wallach et al., 2018). For example, in one study that is comparable to the present study, Hardwicke et al. (2020) focused on a random sample of 250 articles from a diverse collection of social science disciplines published between 2014 and 2017. They observed that 11% of studies included open materials, 7% provided open data, 1% provided open analysis scripts, 1% were replication studies, and surprisingly, none of the 250 studies were pre-registered. They concluded that this lack of transparent and reproducible methods might be undermining the credibility of published scientific research. Another review of 250 articles from psychology, in particular, by Hardwicke et al. (2021) for the same time period reached similar conclusions regarding the use of open science practices, showing that 65% of articles were publicly available (i.e., with no paywall), 14% used open materials, 2% made their data open, and 1% provided open analytic code. Only 5% of the studies were replication studies.
Although these two review articles focused on different disciplines (i.e., social science and psychology, respectively), included fewer articles (i.e., 250 vs. 500 in the present study), and covered a slightly older time period than the present study (i.e., 2014–2017 vs. 2016–2019), they provide a relevant baseline for the prevalence of open science practices. Gambling studies is a multidisciplinary field that includes psychology and social science-based disciplines and uses similar methodologies, so similar prevalence rates of open science practices might be expected in research on gambling-focused topics.
Open Science in Gambling Studies Research
Few studies to date have discussed open science practices within the field of gambling studies. Notably, four commentaries in gambling studies have examined open science practices and related concepts, as well as provided background on the open science movement and potential paths forward for gambling researchers and key stakeholders who might be new to open science (Blaszczynski & Gainsbury, 2019; LaPlante, 2019; Louderback et al., 2021; Wohl, Tabri, & Zelenski, 2019). These papers address issues ranging from journal policy changes to support open science, to strategies to improve gambling research by avoiding HARKing (Hypothesizing After the Results are Known; Kerr, 1998), to ways to advance open science and replication efforts among gambling researchers, to the protective effects that open science practices might provide for research integrity, even for industry-funded research. Empirical research addressing open science topics within the gambling studies field will provide needed support for some of the tips and strategies these early publications offer. Two such studies currently are available.
First, LaPlante et al. (2021) carried out a survey of gambling research stakeholders to provide a preliminary assessment of opinions on and use of open science practices. Questions of interest asked respondents to report on their engagement with four specific types of open science practices and some reasons for concern about open science (e.g., pre-registration potentially stifling research flexibility or creativity). They determined that minorities of gambling research stakeholders reported either some or extensive experience with open science: 44% indicated extensive or regular experience with open science practices, 31% with pre-registration, 33% with open materials or code, 48% with open data, and 16% with preprinting papers. This study suggested that there remains a broad need for open science education in the gambling studies field. However, the study was preliminary in that it relied upon a small convenience sample.
Second, Heirene et al. (2021) described how well gambling researchers’ study pre-registrations reduce Research Degrees of Freedom (RDoF), or the methodological choices that researchers must make when they collect data, analyze data, and report upon their findings (Wicherts et al., 2016). Their review of 53 available pre-registrations suggested that gambling pre-registrations had low specificity for most of the RDoF that they assessed; that is, the pre-registrations did not sufficiently constrain RDoF. A comparison of pre-registrations with 20 available publications or preprints also revealed that 65% of studies deviated from their pre-registered plans without declaring deviations in the publication or in a Transparent Change document. Heirene et al. (2021) interpreted these findings to suggest that, to date, researchers in the gambling studies field are not taking full advantage of the ability of pre-registration to tighten research practices for the purposes of improving the rigor and replicability of their work.
The Present Study
The purpose of the present study was to conduct a scoping review of recent gambling literature in order to assess the field’s current engagement with open science practices. The primary research question was: to what extent do peer-reviewed publications of original (i.e., non-review) quantitative research in the gambling literature use open science research practices, such as pre-registration and open data, and include replication studies with adequate statistical power? We hypothesized that a minority of peer-reviewed publications in the gambling literature would report using open science research practices or consist of replication studies with adequate statistical power.
Methods
Our study was pre-registered on the Open Science Framework (OSF) on 9/27/2019 before beginning the study search process (https://osf.io/f2prd), and our data and scripts are available on our OSF project page. All justified amendments to our pre-registered research plan are also included in Transparent Changes documents on our OSF project page. We drafted the protocol using the Preferred Reporting Items for Systematic Reviews and Meta-Analyses (PRISMA) extension for Scoping Reviews (Tricco et al., 2018). We created a PRISMA-style diagram of the study selection and screening process (see Fig. 1).
Eligibility Criteria
To be included in our scoping review, studies needed to measure or focus on gambling. We defined “gambling” broadly as encompassing the full range of gambling behavior, from gambling without experiencing gambling-related problems (i.e., Level 1), to sub-clinical Gambling Disorder (i.e., Level 2), to Gambling Disorder (i.e., Level 3) (Shaffer et al., 1999). We included studies if they (1) were peer-reviewed, (2) were published between January 1st, 2016 and December 1st, 2019,Footnote 2 (3) were written in English, (4) involved human participants, and (5) described quantitative data analyses. We excluded papers that failed to meet all five inclusion criteria. We also excluded editorials, commentaries, research proposals, study protocols, and review papers, such as meta-analyses, scoping reviews, and literature reviews. We applied the first four inclusion criteria during a database search, and the fifth criterion and the additional exclusion rules during our title, abstract, and full-text inspection.
Information Sources
To identify potentially relevant studies, we searched the following bibliographic databases covering a variety of scientific disciplines: Medline, Embase (medicine); PsycARTICLES, PsycINFO (psychology); Global Health (public health); the Education Resources Information Center [ERIC] (education); and the Social Science Premium Collection. We supplemented our search by searching the January 1st, 2016 to December 1st, 2019 archives of specialized gambling journals, including the Journal of Gambling Studies and International Gambling Studies. We exported the search results into EndNote and removed duplicate articles.
Search Procedures
We selected search terms by reviewing search terms used in published meta-analyses/systematic reviews of gambling. Specifically, we used the following searchFootnote 3 terms: “Gambl*”, “betting”, “wager*”, and “ludomania”. We used the appropriate truncation operator and search strategy format for each database, allowing any of the terms (i.e., terms using the Boolean operator OR, as opposed to AND).
Selection of Sources of Evidence
After we specified our initial sample of studies by employing the first four inclusion criteria during a database search and cross-checking against archives of specialized gambling journals (i.e., Identification), we used Google Sheets and EndNote to screen the titles and abstracts of returned studies to assess their relevance (i.e., Screening). We deemed studies to be relevant if they appeared to describe gambling (as opposed to non-gambling-related gaming or video gaming studies, for example, as such articles are sometimes included in the Journal of Gambling Studies and International Gambling Studies). Studies with vague titles were retained for full-text inspection. We used an iterative process to determine the reliability of our screening process. More specifically, two members of the research team independently screened the titles and abstracts of 10% of all retrieved studies to assess their relevance. They resolved any disagreements through discussion or further adjudication by a third reviewer. Their interrater reliability met our pre-registered criterion of κ ≥ 0.70 (i.e., Cohen’s κ = 0.790; McHugh, 2012), so they divided the remaining retrieved studies into two groups, and screened their titles and abstracts independently.
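Cohen’s κ corrects the raters’ observed agreement for the agreement expected by chance alone, given each rater’s marginal proportions. The following stdlib-only Python sketch illustrates the calculation with hypothetical screening decisions (the study itself used R, and the ratings below are invented for illustration):

```python
from collections import Counter

def cohens_kappa(rater1, rater2):
    """Cohen's kappa: chance-corrected agreement between two raters."""
    n = len(rater1)
    # Proportion of items on which the two raters agree
    observed = sum(a == b for a, b in zip(rater1, rater2)) / n
    counts1, counts2 = Counter(rater1), Counter(rater2)
    # Chance-expected agreement from each rater's marginal proportions
    expected = sum((counts1[c] / n) * (counts2[c] / n)
                   for c in set(counts1) | set(counts2))
    return (observed - expected) / (1 - expected)

# Hypothetical include/exclude decisions for 10 abstracts (not the study's data)
r1 = ["inc", "inc", "inc", "inc", "exc", "exc", "exc", "exc", "exc", "exc"]
r2 = ["inc", "inc", "inc", "exc", "exc", "exc", "exc", "exc", "exc", "inc"]
kappa = cohens_kappa(r1, r2)  # ≈ 0.583 for these toy ratings
```

For these toy data, κ ≈ 0.583 would fall below the study’s pre-registered κ ≥ 0.70 threshold, so the raters would continue reconciling before splitting the screening work.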
Next, three reviewers reviewed the full texts of the remaining studies to assess study eligibility according to the five inclusion criteria (i.e., Eligibility). They first screened 10% of all remaining full-text studies to assess their relevance. They resolved any disagreements through discussion or further adjudication by a third reviewer. Their interrater reliability for the initial full-text screening met our pre-registered criteria of κ ≥ 0.70 and α ≥ 0.70 (i.e., Fleiss’ κ = 0.711; and Krippendorff’s α = 0.711), so they divided the remaining full-text studies into three groups and screened their full-text PDFs independently. This allowed us to confirm that our inclusion criteria were fully satisfied. We only considered studies to be eligible if they met all inclusion criteria. Eligible studies comprised our baseline sample. From the baseline sample of studies that remained eligible for data charting (i.e., 1,251 studies), we selected a simple random sample without replacement of 500 full-text studies, using the sample() function in base R, as our analytic sample of studies for charting.
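The random selection step used base R’s sample(); a Python analogue of drawing a simple random sample without replacement looks like the sketch below (the seed value and sequential study IDs are illustrative, not values used in the study):

```python
import random

# 1,251 eligible baseline studies, identified by hypothetical sequential IDs
eligible_ids = list(range(1, 1252))

# Fixed seed so the draw is reproducible (the seed value here is illustrative)
rng = random.Random(20190927)

# Simple random sample of 500 studies without replacement,
# analogous to R's sample(eligible_ids, 500)
analytic_sample = sorted(rng.sample(eligible_ids, 500))
```

Sampling without replacement guarantees each eligible study appears at most once in the analytic sample, and fixing the seed lets other researchers regenerate the identical sample from the shared script.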
Data Charting Process
We charted data from eligible studies and supplementary sources (e.g., links embedded in publications to pre-registration documents or open data archives) using Google Forms, a secure and customizable online survey platform. We created a custom Google Forms survey (see survey here: https://osf.io/ke3jg/) to record information for this study, including all of the data items in Table 1. We used an iterative process to determine the reliability of our charting. Two independent reviewers charted the data items from a randomly selected subset of articles representing 10% of our analytic sample. They resolved any disagreements through discussion or further adjudication by a third reviewer. Prior to resolving disagreements, we assessed the charters’ interrater reliability, calculating Fleiss’ κ and Krippendorff’s α for each data item. After four rounds of reliability coding, their interrater reliability met our standard for all except six items (see Transparent Change 5 and Transparent Change 6), so we divided the remaining studies into two equal groups and two coders charted them independently. As is described in more detail in our two Transparent Changes documents, to confirm that items that were initially below the reliability threshold were all correct, a Ph.D.-level co-author manually checked all 500 studies.
Data Items
We charted the studies on the data items listed in Table 1. Data items derive, in part, from Transparency and Openness Promotion criteria (TOP; Center for Open Science, 2019) and two recent scoping studies of open science research practices that were publicly pre-registered before we developed the pre-registration for the present study (Adewumi et al., 2021; Hardwicke et al., 2020). During charting, we modified the specific guidelines for assessing a few of the data items in Table 1.Footnote 4 For our charting purposes, we charted ‘yes’ for open access if and only if the final version of the study was available for download on the official journal website without logging in or paying money. We charted ‘yes’ for preprint availability if the preprint was a version of the article that was publicly available for download, yet was not typeset by the journal itself in its published version.
Data Analysis and Synthesis of Results
All analyses were conducted in R version 3.6.2 (R Core Team, 2019). For our main synthesis, we report tables of counts and percentages, with 95% confidence intervals, for our data items (e.g., the count and percentage of all studies that were open access). Using year of publication, we also provide a year-by-year count and percentage of publications that used any single open science research practice from our data items.
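The interval method is not stated in this section, but the reported values (e.g., 54.6% [50.2, 58.9] for 273 of 500 studies) are reproduced by Wilson score intervals for a binomial proportion. A minimal Python sketch, offered as an assumption about the method rather than a documented detail of the paper’s R code:

```python
import math

def wilson_ci(successes, n, z=1.959964):
    """Wilson score 95% confidence interval for a binomial proportion,
    returned as percentages."""
    p = successes / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = (z * math.sqrt(p * (1 - p) / n + z**2 / (4 * n**2))) / denom
    return (100 * (center - half), 100 * (center + half))

# 273 of 500 studies used at least one open science practice (54.6%)
lo, hi = wilson_ci(273, 500)  # ≈ (50.2, 58.9), matching the reported CI
```

Unlike the simpler Wald interval, the Wilson interval never extends below 0% or above 100%, which matters for the near-zero prevalences reported here (e.g., 1.4% for open code).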
To better understand how study design might be related to open science practices, we analyzed ten cross-tabulations of study design (i.e., experimental vs. observational; in the columns) and ten study characteristics related to open science (in the rows). We also investigated how open science practices might have increased, decreased, or remained stable over time by analyzing ten cross-tabulations of year of study publication (in the columns) and these same ten study characteristics (in the rows). We tested the significance of any associations in our cross-tabulations with Chi-square tests, using a p < 0.05 criterion for statistical significance and Cramér’s V to interpret effect sizes.
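For a 2×2 cross-tabulation, the chi-square statistic and Cramér’s V (which equals φ in the 2×2 case) can be computed directly. The sketch below uses illustrative counts loosely reconstructed from figures reported elsewhere in the paper (90 experimental vs. 410 observational studies, with 6 of 8 pre-registered studies experimental); they are not the paper’s charted table, and with expected cells this small a Fisher exact test would normally be preferred:

```python
import math

def chi2_2x2(table):
    """Chi-square test of independence and Cramér's V for a 2x2 table."""
    (a, b), (c, d) = table
    n = a + b + c + d
    row = [a + b, c + d]                      # row marginal totals
    col = [a + c, b + d]                      # column marginal totals
    chi2 = sum(
        (obs - row[i] * col[j] / n) ** 2 / (row[i] * col[j] / n)
        for i, r in enumerate(table)
        for j, obs in enumerate(r)
    )
    p = math.erfc(math.sqrt(chi2 / 2))       # chi-square survival function, df = 1
    v = math.sqrt(chi2 / n)                  # Cramér's V (= phi for a 2x2 table)
    return chi2, p, v

# Rows: experimental, observational; columns: pre-registered, not pre-registered.
# Illustrative reconstruction only.
chi2, p, v = chi2_2x2([[6, 84], [2, 408]])
```

For these illustrative counts the association is significant at p < 0.05 with a small-to-moderate V (≈ 0.19), consistent in direction with the paper’s finding that experimental studies were more likely to be pre-registered.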
Using the final charted data, we created a master spreadsheet with a Characteristics of Included Studies (COIS) table that summarizes study-level information for all data items (https://osf.io/r2n3m/). We generated a unique numeric identifier for each study (i.e., a COIS ID) to identify studies. In the Results section, we synthesize the outcomes of our review narratively to provide an overall description of the extent to which open science practices are used in the body of relevant research evidence available. The R scripts used to conduct the reliability analyses, random study selection, and analyses described in the Results are available on our OSF Project page (https://osf.io/xw7gf/).
Results
Open Science-Related Measures
Table 2 displays counts, percentages, and 95% confidence intervals for open science practices in our sample. Rates of engagement with open science practices ranged from 0% for open notebooks to 35.2% (n = 176) for open access. Overall, 54.6% (n = 273) of studies used at least one of the nine open science practices. Of the 500 studies, we found that 4.4% (n = 22) reported conducting an a priori power analysis and 2.0% (n = 10) reported conducting a post-hoc power analysis. For replication status, 488 (97.6%) studies were original studies and 12 (2.4%) were conceptual replication studies. None of the 500 studies were primary (i.e., direct) replications. Thus, our research hypothesis that a minority of studies would use open science practices was partially supported.
Past Funding and Conflict of Interest Statements
As shown in Table 1, we also coded the presence of past funding (e.g., In the past 5 years, author 1 has received funding from X, Y, Z, etc.) and conflict of interest statements. Such statements provide transparency about potential and actual conflicts of interests, and are intended to prevent obscuring funding sources for each author. For past funding statements, 13.2% (n = 66) of studies included this statement for at least one author. For conflict of interest statements, 21.4% (n = 107) of studies included a statement with potential conflict(s), whereas 56.4% (n = 282) included a statement that indicated no conflicts existed. Thus, overall, 77.8% (n = 389) of studies included a conflict of interest statement.
Sample Size and Study Design
Sample sizes for studies varied considerably, ranging from n = 1 to n = 267,367, with a median sample size of n = 328 and mean sample size of n = 3,321 (SD = 16,997).Footnote 5 For study design, there were substantially more observational studies (82.0%, n = 410) than experimental studies (18.0%, n = 90).
Gambling Concept(s) Measured
For all studies, we coded whether a study measured: gambling participation/involvement, the presence/severity of gambling problems, and/or other gambling concept(s). We found that 84.6% (n = 423) measured gambling participation/involvement, 49.0% (n = 245) measured presence/severity of gambling problems, and 36.6% (n = 183) measured both gambling participation/involvement and presence/severity of gambling problems. A further 33.6% (n = 168) of studies measured other gambling concepts, which spanned a wide variety of topics, ranging from gambling-related cognitions to parental gambling to gambling motives.
Planned Confirmatory Analyses
Study Design and Open Science Practice Variables
Table 3 shows the comparisons of frequencies for open science practice variables for experimental studies and observational studies. A significantly greater percentage of experimental studies used at least one open science practice, as well as pre-registration and a power analysis. There were no significant differences between experimental and observational studies for the other seven items.
Year of Study Publication and Open Science Practice Variables
Table 4 shows the comparisons of open science practices over time. The number of studies included from each year was relatively similar. None of the open science practice variables yielded significant differences across the four-year period.
Unplanned Exploratory Analyses
Evidence Map of Open Science Practices, Gambling Concept(s) Measured and Study Design
During the course of the study, we conducted several sets of unplanned exploratory analyses, which we report in this section. First, similar to the evidence map created in Gray et al.’s (2021) scoping review of gambling and self-harm, we created an evidence map (see Supplemental Table 1 in Online Supplement) that shows the COIS ID numbers for studies based on the gambling concept(s) they measured, whether they used each open science practice, and study design (i.e., experimental in bold and observational un-bolded). This table indicates that studies focused on gambling participation the most, followed by presence or severity of gambling problems, and finally, other gambling concepts. The most common open science practice was open access, followed by preprint, and open materials. Interestingly, while very few studies were pre-registered, the majority of studies that did pre-register (75%) were experimental.
Funder Type and Open Science Practices
In a second set of exploratory analyses, to better understand how studies were funded and how funding might have been associated with open science practice use, we collapsed the nine funder categories into three conceptual groupings: Group 1 (private foundation, university, or no funding), Group 2 (government), and Group 3 (direct industry funding). We developed these unplanned exploratory analyses after conducting the confirmatory analyses. We selected a three-group approach to maintain sufficient statistical power and expected cell counts for conducting Chi-square analyses. Table 5 shows these analyses. Directly industry-funded studies were more likely to use any open science practice, open access, and open materials. Industry-funded studies were also more likely to report the use of a power analysis.
Study Citation Count and Open Science Practices Used
In a third set of exploratory analyses (see Table 6), we examined the number of citations each article received by conducting a Google Scholar search for each article title during the week of August 23rd, 2021. Next, to examine how open science practices might have been associated with citation counts, we ran independent-samples t-tests comparing the mean citation count for studies that did or did not use any open science practice, as well as each of the nine separate open science practices. These results showed that studies that used any open science practice (t = −2.1423; p < 0.05; d = 0.186) and those that were available open access (t = −2.0727; p < 0.05; d = 0.218) had higher mean citation counts. We found no other significant differences for the eight remaining open science practices. In response to reviewer comments, we also conducted unplanned exploratory analyses of citation counts by any open science practice and by each of the nine specific practices, separately within observational studies (n = 410) and experimental studies (n = 90). These results are reported in Supplemental Tables 2 and 3 in the Online Supplement. We did not find any significant differences in these analyses.
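An independent-samples comparison like this reduces to a t statistic plus a standardized effect size (Cohen’s d with a pooled SD). A stdlib-only Python sketch with hypothetical citation counts (not the paper’s data; the paper reports d as a magnitude):

```python
import math
import statistics as st

def welch_t_and_cohens_d(a, b):
    """Welch's t statistic and pooled-SD Cohen's d for two independent samples."""
    ma, mb = st.mean(a), st.mean(b)
    va, vb = st.variance(a), st.variance(b)        # sample variances (n - 1 denominator)
    na, nb = len(a), len(b)
    t = (ma - mb) / math.sqrt(va / na + vb / nb)   # Welch: no equal-variance assumption
    pooled_sd = math.sqrt(((na - 1) * va + (nb - 1) * vb) / (na + nb - 2))
    return t, (ma - mb) / pooled_sd

# Hypothetical citation counts: no open science practice vs. any practice
closed = [1, 2, 3, 4, 5]
opened = [3, 4, 5, 6, 7]
t, d = welch_t_and_cohens_d(closed, opened)  # t = -2.0 for these toy data
```

The negative sign mirrors the paper’s t values: the first group (no open science practice) has the lower mean, so studies using any practice are cited more often.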
Discussion
The scientific study of open science practices in the gambling research field is limited. This scoping review examined the use of open science practices in a random sample of 500 gambling studies research publications for 2016 through 2019. More than half (54.6%) of the studies used at least one open science practice. Although this is not a large majority, it ran counter to our expectations that a minority would have used at least one open science practice. We also found that—with the exception of open access and preprint posting—all open science practices had very low prevalence (i.e., fewer than 10% of studies used each practice). Temporal trends might be suggestive of increasing open science practice use over time, but they were not statistically significant. Small cell counts and limited statistical power likely prevented us from observing statistically significant changes for each open science practice by year.
We found that experimental studies were more likely to have been pre-registered and to report a power analysis. This suggests that experimental studies—which often have small sample sizes and registration requirements for participant recruiting (e.g., in clinical trials)—might be more concerned with the necessary sample size to ensure adequate statistical power and must also pre-register to comply with mandates (e.g., for FDA-regulated trials). In contrast, observational studies often (but not always) have larger sample sizes, even into the 1,000s. For these types of studies, statistical power might be less of a concern and no regulatory mandates exist to encourage pre-registration.
Unplanned exploratory analyses of study funder groupings showed that industry funded studies tended to use more open science practices, including open access, open materials, and a power analysis, when compared to government, private foundation, university funded, or unfunded studies. There are two potential explanations for this pattern of findings. First, it is possible that researchers—being aware of criticisms of industry funded research and thus seeking objectivity (see Cottler et al., 2016)—intentionally use more open science practices to help emphasize the firewall of independence between researchers and their industry funders. Indeed, this particular issue is discussed by Louderback et al. (2021), going so far as to develop guidelines for the integration of open science practices within industry funding models. A second, and more mundane explanation, which is relevant for open access in particular, is the desire for industry funders to view and distribute the products of research without paywalls. The findings that government-funded studies also were more likely to be open access might also reflect this explanation; government funders are increasingly mandating that published study manuscripts be made available with open access status (Rabesandratana, 2019), which undoubtedly impacts whether studies are indeed publicly available without paywalls.
Additional unplanned exploratory analyses of citation counts showed that studies that used any open science practice, or that were available via open access, were cited more frequently. It is possible that using more open science practices made studies visible to larger audiences, for example, by allowing interested readers to download studies without a paywall, thereby increasing citation counts. The higher citation rates we observed among open access articles are consistent with studies in other scientific disciplines (Basson et al., 2021; Kousha & Abdoli, 2010; Norris et al., 2008), and a recent systematic review (Langham-Putrow et al., 2021) noted a small citation advantage for open access articles. Research suggests that this advantage is likely due more to greater article visibility and accessibility than to the self-selection of more rigorous or impactful articles into open access (Gargouri et al., 2010; Langham-Putrow et al., 2021), although we were unable to test this possibility empirically in the present study. In contrast to Colavizza et al. (2020), we did not find that studies with open data received more citations. The low prevalence of, and resulting small cell counts for, other practices might have prevented us from finding similar citation patterns for open data, code, and materials. Low prevalence and small cell counts might also have played a role in the non-significant citation-count results in exploratory analyses conducted separately within observational and experimental studies. Follow-up studies should replicate these analyses once open science practices have developed further and become more prevalent in gambling studies.
Importantly, the present study represents the first scientific scoping review of open science practice usage in contemporary gambling studies research. We reviewed the largest representative sample of publications examined to date in reviews of open science practices, which yields more precise point estimates (i.e., narrower confidence intervals). Our finding that open science practices are limited in published gambling research is largely consistent with other recent scoping reviews of open science practice use in the social sciences (Hardwicke et al., 2020, 2021) and with gambling researchers’ self-reported use of open science practices (LaPlante et al., 2021); this key finding suggests that considerable progress remains to be made in integrating open science principles into gambling studies research.
It also raises potential concerns about limited methodological transparency and limited sharing of essential materials (e.g., data and analysis scripts) in gambling studies research, both of which can undermine the rigor, relevance, and replicability of scientific research (Allen & Mehler, 2019; Gorgolewski & Poldrack, 2016). A failure to share study materials might also act as a barrier to scientific innovation (Conrado et al., 2017), and thus could slow efforts to build a body of knowledge that supports evidence-based approaches to the identification, prevention, and treatment of Gambling Disorder and gambling-related harm. The next sections provide a more in-depth discussion of these implications and offer practical suggestions for enhancing the uptake of open science practices among gambling researchers. These suggestions can also be extended to other scientific disciplines.
Practical Implications
The present findings suggest a need for more researchers to engage with open science within the multidisciplinary field of gambling research. One effective source of change can be young researchers, such as students and those early in their scientific careers. If engagement with open science knowledge and practices is taught early, it can become a regular and habitual part of the research process as young researchers advance in their careers (see Allen & Mehler, 2019 for open science tips for early career scholars). Several resources provide information and entry points for making these practices part of research. For example, Banks et al. (2019) provided clear answers to 18 questions about open science practices, covering the principles of open science, the benefits and challenges of engaging in open science practices, the progress made thus far in adopting them, and steps that could be taken to facilitate their uptake. This article is one of many resources (see also Gehlbach & Robinson, 2021) that clearly break down the principles of open science and detail how researchers can take steps to engage in such practices.
Organizations and educational programs are already working to teach young researchers about open science, including the Center for Open Science, which runs the Open Science Framework, and the FOSTER Open Science organization (FOSTER, 2021). FOSTER is a European Union (E.U.)-based organization that aims to create a cultural change in which open science is integrated into research practices and rewarded within the research community. It pursues this goal through a fully open and free training program that provides a continued support network after training is completed; individuals who complete the training receive a badge as an Open Science Ambassador. FOSTER has also created resources, such as the Open Science Toolkit and the Open Science Training Handbook, that offer additional opportunities to disseminate information about the benefits of open science and how it can be integrated into research practices.
To engage more established researchers in open science practices, academic institutions might consider making these practices conditions for tenure and promotion, similar to existing requirements concerning the number of publications or citations of one’s work (Mu & Hatch, 2021). Additionally, integrating open science practices as a basic standard for research methods in undergraduate, graduate, and continuing education courses would help drive an overall culture shift towards embracing such principles and practices.
Policy Implications
Because our findings suggest a need for increased engagement with open science practices, policies might be considered to help facilitate knowledge and engagement. As the gatekeepers to publication, journals have the ability—and some might argue the responsibility—to take a leading role in this transition by creating policies that promote open science practice use in published peer-reviewed research (see Nutu et al., 2019 for initial steps taken by psychology journals). Aguinis and colleagues (2020) provided 10 actionable policy recommendations for authors and journals that could help narrow the gap between open science theory and practice. One recommendation is for editors to introduce a policy of results-blind review, in which a full manuscript is submitted for review with the results section excluded. This practice might reduce editors’ and reviewers’ bias toward positive results and promote a stronger focus on the theoretical and practical impact of the study itself, rather than on whether the findings were statistically significant (see Ingre & Nilsonne, 2018). Other important recommendations for journals discussed by Aguinis et al. (2020) include: (1) requiring pre-registration for every study, (2) requiring the public sharing of data, code, and materials for all studies, (3) creating a review track that includes Registered Reports, (4) providing an online archive for each journal article, such as the Open Science Framework, where authors can post their study materials, (5) creating a best-paper award based on open science criteria, and (6) providing access to open science training for all research stakeholders.
Beyond these suggestions for journals, recommending that funding agencies modify their policies to facilitate open science practices is another potential avenue for change. For example, funding agencies could require that funded authors pre-register their studies and conduct a power analysis; as noted above, government funding bodies in the United States (U.S.) and E.U. have already started mandating that peer-reviewed publications be available via open access (Rabesandratana, 2019). Funders could also encourage that de-identified study data be made public after a period of time, thereby facilitating open data sharing. All of these policy changes require minimal additional resources beyond time investment; however, both funders and research institutions must recognize and account for this time investment to enable researchers to engage with open science practices.
Study Limitations
Our study is not without limitations. First, we included only 500 articles out of the total sample (N = 1,251) of relevant articles identified in our database search. However, because the articles were randomly selected from the total sample, they are representative of all articles that met the inclusion criteria, and we reported all point estimates with confidence intervals to quantify potential sampling error. Second, we only included articles published between January 1, 2016 and December 1, 2019. Because open science practices have been on the rise in recent years, we likely would have found even less engagement with open science practices in the gambling literature had we included studies from before 2016; in addition, the COVID-19 pandemic would have confounded the practices used in studies published in 2020 and 2021. Third, we charted items using an agreed-upon coding scheme to maintain consistency, and other authors might have developed a different scheme for charting data items. For example, we coded conflict of interest statements as present based on statements with various headers (e.g., “Competing Interests Statement”, “Declaration of Competing Interests”, “Conflict of Interest Statement”), whereas other researchers might have only counted statements titled “Conflict of Interest Statement”.
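For readers who wish to reproduce or extend these interval estimates, the reported 95% CI for the headline estimate (54.6% of 500 studies, i.e., 273 studies, using at least one open science practice) is consistent with the Wilson score method (Newcombe, 1998). The sketch below is an illustrative Python re-computation, not the R code used for the published analyses:

```python
from math import sqrt

def wilson_ci(count, n, z=1.959964):
    """Wilson score interval for a single proportion (Newcombe, 1998)."""
    p = count / n
    denom = 1 + z**2 / n
    center = (p + z**2 / (2 * n)) / denom
    half = z * sqrt(p * (1 - p) / n + z**2 / (4 * n**2)) / denom
    return center - half, center + half

# 273 of 500 sampled studies (54.6%) used at least one open science practice
lo, hi = wilson_ci(273, 500)
print(f"95% CI: [{100 * lo:.1f}, {100 * hi:.1f}]")  # → 95% CI: [50.2, 58.9]
```

Unlike the simpler Wald interval, the Wilson interval remains well-behaved for the small proportions reported here; for example, `wilson_ci(8, 500)` yields the asymmetric bounds [0.8, 3.1] reported for the 1.6% pre-registration estimate.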
Directions for Future Research
Based on our study, there are five avenues for future research. First, when collecting data on the articles reviewed in this study, we also recorded the titles of the journals in which they were published. Future studies should explore and compare open science practices across journals: journals might differ in which open science practices they favor, and publications in some journals might engage in open science to a greater degree. The same approach could be extended to compare the presence of replication studies across journals. To our knowledge, no scoping reviews have compared open science and replication practices across specific journals or journal groupings (e.g., by impact factor and/or fully open access vs. non-fully open access).
Second, it would be useful to test the extent to which studies that used more open science practices were more likely to report non-significant findings (i.e., null results). As described previously, a key motivation for the open science movement was to increase transparency in the research process and to discourage practices such as “p-hacking” (Berman et al., 2018) and HARKing (Bosco et al., 2016; Kerr, 1998), which are used to produce the significant findings that many journals prioritize over null findings. It is therefore reasonable to expect that authors who engage in more open science practices (e.g., pre-registration) are more likely to report non-significant findings. Registered Reports in psychology do yield far lower rates of statistical significance than standard articles (Scheel et al., 2021), so examining this relationship in other disciplines would be an important contribution. Beyond Registered Reports, it would be interesting to examine the association between other open science practices and the reporting of non-significant findings, and whether moderating factors, such as the journal in which an article was published and/or its funder(s), play a role in this relationship.
Third, another fruitful avenue of research concerns ways the scientific community can promote the use of open science practices among gambling researchers through educational programs (Crüwell et al., 2019) and journal incentives (e.g., Open Science Badges; see Grahe, 2014). Given the present findings and other research showing that only a minority of gambling research stakeholders report engaging in open science practices (LaPlante et al., 2021), additional research into motivations for and barriers to the use of open science practices is warranted, both for researchers publishing gambling literature and for science more broadly. Because the use of open science practices was relatively low in recent gambling-focused research, a better understanding of attitudes towards open science will help identify next steps for promoting such practices in the months and years to come.
Fourth, future research could examine the standardization of conflict of interest and funding statements across articles, and whether the absence of clear funding statements is associated with industry-funded or other types of funding arrangements. Our data charting process showed that there is currently no standard for conflict of interest statements or for statements listing funding that a researcher has received in the past. It would be interesting to examine whether articles with certain types of statements, or those lacking such statements altogether, were associated with particular funding sources, and in turn whether those studies reported more significant (or non-significant) findings. Clear cross-journal guidelines (e.g., International Gambling Studies’ five-year past funding disclosure policy) would require authors to be transparent about their funders and their research more generally.
Fifth, given the rise of open science practices within the current peer-review process, it is important that future research examines the extent to which reviewers are educated and informed about open science, including pre-registration and deviations from pre-registered plans (see Heirene et al., 2021), the separation of confirmatory and exploratory analyses, and related scientific concerns (e.g., HARKing). The current peer-review process can involve reviewers suggesting changes to a study’s methodology, data analyses, hypotheses, and presentation of results (Jana, 2019). Such suggestions can lead to questionable research practices in revised manuscripts, including HARKing, selective outcome reporting (e.g., requests to omit non-significant results), and p-hacking, and can thereby contribute to publication bias through the omission or removal of non-significant findings. Future studies could therefore investigate how often reviewers suggest deviations from pre-registered study plans and how differences in reviewers’ open science knowledge might influence publication decisions.
Conclusion
Our findings suggest a large potential for growth in open science practice usage among gambling studies researchers, which also signals a substantial need for education on open science topics for established and aspiring researchers alike. Although the prospect of integrating open science as a standard methodological practice might be daunting to some, the benefits of this research evolution will be manifold. However, efforts that promote changes in how researchers approach their work must be accompanied by structural changes to the work environment that support open science practices. Research institutions, journals, and funders must support researchers by providing sufficient time, training, and incentives (e.g., publication and promotion requirements) to maximize the uptake of open science practices and, in doing so, substantially improve the quality and transparency of the scientific evidence base.
Data Availability and Materials
Data and materials for this article are available on the Open Science Framework (https://osf.io/xw7gf/).
Notes
We refer to “gambling studies” as a multidisciplinary research field that examines micro-level (e.g., gambling frequency) and macro-level (e.g., national prevalence of gambling) gambling behavior, problem gambling, DSM-5 criteria Gambling Disorder, and risk factors for experiencing gambling-related harm.
This time period is particularly relevant for open science because the Pre-registration Challenge—which offered researchers $1,000 for each successfully published pre-registered study—was launched by the Open Science Framework in December 2015 and led to greater interest in open science and a substantial rise in pre-registered research (Mellor et al., 2019). We selected the end date because we originally conducted the database search in December 2019, only 2 to 3 months before the rise of the COVID-19 pandemic, which has greatly impacted open science and science more generally worldwide.
Our initial pre-registered search terms included “gaming”; however, this search strategy yielded many irrelevant studies (e.g., examining video gaming or sports games), so we transparently detailed this modification in a publicly posted Transparent Change document on OSF (i.e., Transparent Change 2).
During the charting process, the team made changes to the guidelines for assessing a few of the data items. These items included the study date, open access, preprint, and, to a much greater extent, the study funder. The modifications were made to increase the consistency and clarity of the charting process for coders. These changes are described in full detail in Transparent Change 1 and Transparent Change 4, which are publicly available on OSF.
While not extremely large, especially in the case of the median, the median and mean sample sizes do exceed long-standing conventional recommendations for minimum sample sizes (i.e., n > ~50; see Green, 1991) for conducting many parametric statistical tests (e.g., regression). However, depending on the targeted effect size (i.e., small, medium, or large), alpha level (e.g., α = 0.05), and power (1 − β), these sample sizes might be insufficient to detect true effects (Cohen, 1992; O'Keefe, 2007), yielding a greater probability of a Type II error (i.e., failing to reject a false null hypothesis of no difference or effect, and thereby missing a real effect that would support the alternative hypothesis).
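The dependence of required sample size on effect size, alpha, and power can be made concrete with the standard normal-approximation formula for a two-sample t-test. The following is an illustrative Python sketch (the function name is ours; the exact t-based tables in Cohen, 1992, give slightly larger values than the normal approximation):

```python
from math import ceil
from statistics import NormalDist

def n_per_group(d, alpha=0.05, power=0.80):
    """Approximate per-group n for a two-sample t-test (normal approximation):
    n ≈ 2 * ((z_{1-alpha/2} + z_{1-beta}) / d)^2
    """
    z_alpha = NormalDist().inv_cdf(1 - alpha / 2)
    z_power = NormalDist().inv_cdf(power)
    return ceil(2 * ((z_alpha + z_power) / d) ** 2)

# Medium effect (d = 0.5), two-tailed alpha = .05, power = .80:
print(n_per_group(0.5))  # → 63 (Cohen's 1992 exact t-based table gives 64)
# A small effect (d = 0.2) requires roughly six times as many participants:
print(n_per_group(0.2))  # → 393
```

The jump from roughly 63 to 393 participants per group illustrates why small experimental samples risk Type II errors for all but large effects, and why observational studies with samples in the thousands face fewer power concerns.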
References
Adewumi, M. T., Vo, N., Tritz, D., Beaman, J., & Vassar, M. (2021). An evaluation of the practice of transparency and reproducibility in addiction medicine literature. Addictive Behaviors, 112, 106560.
Aguinis, H., Banks, G. C., Rogelberg, S. G., & Cascio, W. F. (2020). Actionable recommendations for narrowing the science-practice gap in open science. Organizational Behavior and Human Decision Processes, 158, 27–35.
Allen, C., & Mehler, D. M. (2019). Open science challenges, benefits and tips in early career and beyond. PLoS Biology, 17(5), e3000246.
Anderson, S., Kelley, K., & Maxwell, S. (2017). Sample-size planning for more accurate statistical power: A method adjusting sample effect sizes for publication bias. Psychological Science, 28(11), 1547–1562.
Basson, I., Blanckenberg, J. P., & Prozesky, H. (2021). Do open access journal articles experience a citation advantage? Results and methodological reflections of an application of multiple measures to an analysis by WoS subject areas. Scientometrics, 126(1), 459–484.
Berman, R., Pekelis, L., Aisling, S., & Van den Bulte, C. (2018). p-Hacking and False Discovery in A/B Testing. Retrieved from SSRN: https://doi.org/10.2139/ssrn.3204791
Blaszczynski, A., & Gainsbury, S. M. (2019). Editor’s note: Replication crisis in the social sciences. International Gambling Studies, 19(3), 359–361.
Banks, G. C., Field, J. G., Oswald, F. L., O’Boyle, E. H., Landis, R. S., Rupp, D. E., & Rogelberg, S. G. (2019). Answers to 18 questions about open science practices. Journal of Business and Psychology, 34(3), 257–270.
Bosco, F. A., Aguinis, H., Field, J. G., Pierce, C. A., & Dalton, D. R. (2016). HARKing’s threat to organizational research: Evidence from primary and meta-analytic sources. Personnel Psychology, 69(3), 709–750.
Callaway, E. (2011). Report finds massive fraud at Dutch universities. Nature, 479(7371), 15.
Camerer, C. F., Dreber, A., Forsell, E., Ho, T. H., Huber, J., Johannesson, M., Kirchler, M., Almenberg, J., Altmejd, A., Chan, T., Heikensten, E., Holzmeister, F., Imai, T., Isaksson, S., Nave, G., Pfeiffer, T., Razen, M., & Hang, W. (2016). Evaluating replicability of laboratory experiments in economics. Science, 351(6280), 1433–1436.
Camerer, C. F., Dreber, A., Holzmeister, F., Ho, T. H., Huber, J., Johannesson, M., Kirchler, M., Nave, G., Nosek, B. A., Pfeiffer, T., Altmejd, A., Buttrick, N., Chan, T., Chen, Y., Forsell, E., Gampa, A., Heikensten, E., Hummer, L., et al. (2018). Evaluating the replicability of social science experiments in nature and science between 2010 and 2015. Nature Human Behaviour, 2(9), 637–644.
Center for Open Science. (2019). The TOP Guidelines. Retrieved from https://cos.io/top/
Cohen, J. (1992). A power primer. Psychological Bulletin, 112(1), 155–159.
Colavizza, G., Hrynaszkiewicz, I., Staden, I., Whitaker, K., & McGillivray, B. (2020). The citation advantage of linking publications to research data. PLoS ONE, 15(4), e0230416.
Conrado, D. J., Karlsson, M. O., Romero, K., Sarr, C., & Wilkins, J. J. (2017). Open innovation: Towards sharing of data, models and workflows. European Journal of Pharmaceutical Sciences, 109, S65–S71.
Cottler, L. B., Chung, T., Hodgins, D. C., Jorgensen, M., & Miele, G. (2016). The NCRG firewall works. Addiction, 111(8), 1489–1490.
Crüwell, S., van Doorn, J., Etz, A., Makel, M. C., Moshontz, H., Niebaum, J. C., Orben, A., Parsons, S., & Schulte-Mecklenbeck, M. (2019). Seven easy steps to open science. Zeitschrift für Psychologie. https://doi.org/10.1027/2151-2604/a000387
Eggertson, L. (2010). Lancet retracts 12-year-old article linking autism to MMR vaccines. Canadian Medical Association Journal, 182(4), E199–E200.
Ferguson, C. J., & Heene, M. (2012). A vast graveyard of undead theories: Publication bias and psychological science’s aversion to the null. Perspectives on Psychological Science, 7(6), 555–561.
FOSTER. (2021). About the FOSTER portal. https://www.fosteropenscience.eu/about.
Gargouri, Y., Hajjem, C., Larivière, V., Gingras, Y., Carr, L., Brody, T., & Harnad, S. (2010). Self-selected or mandated, open access increases citation impact for higher quality research. PLoS ONE, 5(10), e13636.
Gehlbach, H., & Robinson, C. D. (2021). From old school to open science: The implications of new research norms for educational psychology and beyond. Educational Psychologist, 56, 79–89.
Godlee, F., Smith, J., & Marcovitch, H. (2011). Wakefield’s article linking MMR vaccine and autism was fraudulent: Clear evidence of falsification of data should now close the door on this damaging vaccine scare. BMJ, 342(7788), 64–66.
Gorgolewski, K. J., & Poldrack, R. A. (2016). A practical guide for improving transparency and reproducibility in neuroimaging research. PLoS Biology, 14(7), e1002506.
Grahe, J. E. (2014). Announcing open science badges and reaching for the sky. The Journal of Social Psychology, 154(1), 1–3.
Gray, H. M., Edson, T. C., Nelson, S. E., Grossman, A. B., & LaPlante, D. A. (2021). Association between gambling and self-harm: A scoping review. Addiction Research and Theory, 29(3), 183–195.
Green, S. B. (1991). How many subjects does it take to do a regression analysis. Multivariate Behavioral Research, 26(3), 499–510.
Hardwicke, T. E., Wallach, J. D., Kidwell, M. C., Bendixen, T., Crüwell, S., & Ioannidis, J. P. (2020). An empirical assessment of transparency and reproducibility-related research practices in the social sciences (2014–2017). Royal Society Open Science, 7(2), 190806.
Hardwicke, T. E., Thibault, R. T., Kosie, J. E., Wallach, J. D., Kidwell, M. C., & Ioannidis, J. P. A. (2021). Estimating the prevalence of transparency and reproducibility-related research practices in psychology (2014–2017). Perspectives on Psychological Science. https://doi.org/10.1177/1745691620979806
Heirene, R., LaPlante, D., Louderback, E. R., Keen, B., Bakker, M., Serafimovska, A., & Gainsbury, S. M. (2021). Preregistration specificity & adherence: A review of preregistered gambling studies & cross-disciplinary comparison. PsyArXiv Preprint. https://doi.org/10.31234/osf.io/nj4es
Ingre, M., & Nilsonne, G. (2018). Estimating statistical power, posterior probability and publication bias of psychological research using the observed replication rate. Royal Society Open Science, 5(9), 181190.
Iqbal, S. A., Wallach, J. D., Khoury, M. J., Schully, S. D., & Ioannidis, J. P. A. (2016). Reproducible research practices and transparency across the biomedical literature. PLoS Biology, 14(1), e1002333.
Jana, S. (2019). A history and development of peer-review process. Annals of Library and Information Studies, 66(4), 152–162.
Kerr, N. L. (1998). HARKing: Hypothesizing after the results are known. Personality and Social Psychology Review, 2(3), 196–217.
Klein, R. A., Vianello, M., Hasselman, F., Adams, B. G., Adams, R. B., Jr., Alper, S., Aveyard, M., Axt, J. R., Babalola, M. T., Bahník, Š, Batra, R., Berkics, M., Bernstein, M. J., Berry, D. R., Bialobrzeska, O., Binan, E. D., Bocian, K., Brandt, M. J., Busching, R., et al. (2018). Many Labs 2: Investigating variation in replicability across samples and settings. Advances in Methods and Practices in Psychological Science, 1(4), 443–490.
Kousha, K., & Abdoli, M. (2010). The citation impact of Open Access agricultural research: A comparison between OA and non-OA publications. Online Information Review, 34(5), 772–785.
Langham-Putrow, A., Bakker, C., & Riegelman, A. (2021). Is the open access citation advantage real? A systematic review of the citation of open access and subscription-based articles. PLoS ONE, 16(6), e0253129.
LaPlante, D. A. (2019). Replication is fundamental, but is it common? A call for scientific self-reflection and contemporary research practices in gambling-related research. International Gambling Studies, 19(3), 362–368.
LaPlante, D. A., Louderback, E. R., & Abarbanel, B. (2021). Gambling researchers’ use and views of open science principles and practices: A brief report. International Gambling Studies, 21(3), 381–394.
Lindsay, D. S. (2015). Replication in psychological science. Psychological Science, 26(12), 1827–1832.
Loken, E., & Gelman, A. (2017). Measurement error and the replication crisis. Science, 355(6325), 584–585.
Louderback, E. R., Wohl, M. J., & LaPlante, D. A. (2021). Integrating open science practices into recommendations for accepting gambling industry research funding. Addiction Research and Theory, 29(1), 79–87.
Maxwell, S. E., Lau, M. Y., & Howard, G. S. (2015). Is psychology suffering from a replication crisis? What does “failure to replicate” really mean? American Psychologist, 70(6), 487–498.
McHugh, M. L. (2012). Interrater reliability: The kappa statistic. Biochemia Medica, 22(3), 276–282.
Mellor, D. T., Esposito, J., Hardwicke, T. E., Nosek, B. A., Cohoon, J., Soderberg, C. K., Kidwell, M. C., Clyburne-Sherin, A., Buck, S., DeHaven, A. C., & Speidel, R. (2019). Preregistration challenge: Plan, test, discover. Preprint retrieved from https://osf.io/x5w7h/
Moreau, D., & Gamble, B. (2020). Conducting a meta-analysis in the age of open science: Tools, tips, and practical recommendations. Psychological Methods. https://doi.org/10.1037/met0000351
Mu, F., & Hatch, J. (2021). Becoming a teacher scholar: The perils and promise of meeting the promotion and tenure requirements in a business school. Journal of Management Education, 45(2), 293–318.
Munafò, M. R. (2016). Opening up addiction science. Addiction, 111(3), 387–388.
Newcombe, R. G. (1998). Two-sided confidence intervals for the single proportion: Comparison of seven methods. Statistics in Medicine, 17(8), 857–872.
Norris, M., Oppenheim, C., & Rowland, F. (2008). The citation advantage of open-access articles. Journal of the American Society for Information Science and Technology, 59(12), 1963–1972.
Norris, E., He, Y., Loh, R., West, R., & Michie, S. (2021). Assessing markers of reproducibility and transparency in smoking behaviour change intervention evaluations. Journal of Smoking Cessation. https://doi.org/10.1155/2021/6694386
Nosek, B. A., Ebersole, C. R., DeHaven, A. C., & Mellor, D. T. (2018). The preregistration revolution. PNAS, 115(11), 2600–2606.
Nosek, B. A., & Lakens, D. (2014). Registered Reports: A method to increase the credibility of published results. Social Psychology, 45(3), 137–141.
Nosek, B. A., & Errington, T. M. (2020). What is replication? PLoS Biology, 18(3), e3000691.
Nutu, D., Gentili, C., Naudet, F., & Cristea, I. A. (2019). Open science practices in clinical psychology journals: An audit study. Journal of Abnormal Psychology, 128(6), 510–516.
O’Keefe, D. J. (2007). Brief report: Post hoc power, observed power, a priori power, retrospective power, prospective power, achieved power: Sorting out appropriate uses of statistical power analyses. Communication Methods and Measures, 1(4), 291–299.
Open Science Collaboration. (2012). An open, large-scale, collaborative effort to estimate the reproducibility of psychological science. Perspectives on Psychological Science, 7(6), 657–660.
Open Science Collaboration. (2015). Estimating the reproducibility of psychological science. Science, 349(6251), aac4716.
R Core Team (2019). R: A language and environment for statistical computing. R Foundation for Statistical Computing (v. 3.6.2), Vienna, Austria. https://www.R-project.org/
Rabesandratana, T. (2019). The world debates open-access mandates. Science, 363(6422), 11–12.
Scheel, A. M., Schijen, M. R., & Lakens, D. (2021). An excess of positive results: Comparing the standard psychology literature with registered reports. Advances in Methods and Practices in Psychological Science, 4(2), 1–12.
Schooler, J. W. (2014). Metascience could rescue the ‘replication crisis.’ Nature, 515(7525), 9.
Schulz, J. B., Cookson, M. R., & Hausmann, L. (2016). The impact of fraudulent and irreproducible data to the translational research crisis–solutions and implementation. Journal of Neurochemistry, 139(S2), 253–270.
Shaffer, H. J., Hall, M. N., & Vander Bilt, J. (1999). Estimating the prevalence of disordered gambling behavior in the United States and Canada: A research synthesis. American Journal of Public Health, 89(9), 1369–1376.
Tricco, A. C., Lillie, E., Zarin, W., O’Brien, K. K., Colquhoun, H., Levac, D., Moher, D., Peters, M. D., Horsley, T., Weeks, L., Hempel, S., Akl, E. A., Chang, C., McGowan, J., Stewart, L., Hartling, L., Aldcroft, A., Wilson, M. G., Garritty, C., et al. (2018). PRISMA extension for scoping reviews (PRISMA-ScR): Checklist and explanation. Annals of Internal Medicine, 169(7), 467–473.
Wallach, J. D., Boyack, K. W., & Ioannidis, J. P. A. (2018). Reproducible research practices, transparency, and open access data in the biomedical literature, 2015–2017. PLOS Biology, 16(11), e2006930.
West, R. (2020). Open science and pre-registration of studies and analysis plans. Addiction, 115(1), 5.
Wicherts, J. M. (2011). Psychology must learn a lesson from fraud case. Nature, 480(7375), 7.
Wicherts, J. M., Veldkamp, C. L., Augusteijn, H. E., Bakker, M., Van Aert, R., & Van Assen, M. A. (2016). Degrees of freedom in planning, running, analyzing, and reporting psychological studies: A checklist to avoid p-hacking. Frontiers in Psychology, 7, 1832.
Wohl, M. J., Tabri, N., & Zelenski, J. M. (2019). The need for open science practices and well-conducted replications in the field of gambling studies. International Gambling Studies, 19(3), 369–376.
Acknowledgements
We are grateful to Drs. Sarah E. Nelson and Heather M. Gray for comments on an earlier draft of the pre-registration for this study. We would like to acknowledge Alexander LaRaja and Jamie Juviler for their careful screening of studies for this project. We would also like to acknowledge Taylor Lee and John Slabczynski for their work on formatting and describing the Evidence Map, and for their coding in the exploratory analyses.
Funding
Open Access funding enabled and organized by CAUL and its Member Institutions. This research was supported primarily by a research contract between the Division on Addiction and Entain PLC (Entain; formerly GVC Holdings PLC), with a subcontract to the University of Sydney (to Gainsbury). Entain is a large international gambling and online gambling operator. Entain had no involvement in the development of our research questions or protocol. Entain did not see any associated materials (i.e., retrieved studies, charted data, and manuscripts in preparation) while the study was in progress and does not have any editorial rights to any resultant manuscripts. Entain communication about this work requires approval by the Division on Addiction.
Author information
Contributions
Conceptualization: ERL, SMG, RMH, BJB, DAL; Methodology: ERL, SMG, RMH, BJB, DAL; Formal analysis and investigation: ERL, RMH; Writing—original draft preparation: ERL, SMG, RMH, KA, AG, DAL; Writing—review and editing: ERL, SMG, RMH, KA, AG, DAL; Funding acquisition: SMG, BJB, DAL; Resources: DAL, SMG; Supervision: ERL, DAL.
Ethics declarations
Conflict of interest
When this article was published, the Division on Addiction (Division) was receiving funding from DraftKings, Inc., a sports betting and gaming company; Entain PLC (formerly GVC Holdings PLC), a sports betting and gambling company; EPIC Risk Management; Foundation for Advancing Alcohol Responsibility, a not-for-profit organization founded and funded by a group of distillers; Massachusetts Department of Public Health, Office of Problem Gambling Services via Health Resources in Action; MGM Resorts International via the University of Nevada, Las Vegas; National Institutes of Health (National Institute of General Medical Sciences and National Institute on Drug Abuse) via The Healing Lodge of the Seven Nations; and Substance Abuse and Mental Health Services Administration via the Addiction Treatment Center of New England. During the past five years, the Division on Addiction has also received funding from David H. Bor Library Fund, Cambridge Health Alliance; Fenway Community Health Center, Inc.; Greater Boston Council on Alcoholism; Integrated Centre on Addiction Prevention and Treatment of the Tung Wah Group of Hospitals, Hong Kong; Massachusetts Department of Public Health, Bureau of Substance Addiction Services; Massachusetts Department of Public Health, Bureau of Substance Addiction Services via St. Francis House; the Massachusetts Gaming Commission, Commonwealth of Massachusetts; and Substance Abuse and Mental Health Services Administration via the Gavin Foundation. During the past five years, the UNLV International Gaming Institute (IGI) has received research funding from MGM Resorts International, Wynn Resorts Ltd, Las Vegas Sands Corporation, Caesars Entertainment Corporation, Ainsworth Game Technology, Sightline Payments, Global Payments, U.S.-Japan Business Council, State of Nevada, Knowledge Fund, and State of Nevada Department of Health and Human Services.
IGI runs the triennial research-focused International Conference on Gambling and Risk Taking, whose sponsors include industry, academic, and legal/regulatory stakeholders in gambling. A full list of sponsors for the most recent conference can be found at https://www.unlv.edu/igi/conference/17th/sponsors. During the past five years, Dr. Eric R. Louderback has received research funding from a grant issued by the National Science Foundation (NSF), a government agency based in the United States. His research has been financially supported by a Dean's Research Fellowship from the University of Miami College of Arts & Sciences, who also provided funds to present at academic conferences. He has received travel support funds from the Hebrew University of Jerusalem to present research findings and has provided consulting services on player safety programs for Premier Lotteries Ireland. During the past five years, Dr. Sally Gainsbury has worked on projects that have received funding and in-kind support through her institution from the Australian Research Council, NSW Liquor and Gaming, NSW Office of Responsible Gambling, Victorian Responsible Gambling Foundation, National Association for Gambling Studies, Responsible Wagering Australia, Australian Communication and Media Authority, Commonwealth Bank of Australia, ClubsNSW, Crown Resorts, and Wymac Gaming. Dr. 
Gainsbury has received honoraria directly and indirectly for research, presentations, and advisory services from Gamble Aware, Behavioral Insights Team (UK), National Council on Problem Gambling Singapore, Star Entertainment, Australian Cricketers Association, Credit Suisse, Oxford University, ClubsNSW, Centrecare WA, Gambling Research Exchange Ontario, Crown, Community Clubs Victoria, Financial and Consumer Rights Council, Australian Communications and Media Authority, Taylor & Francis, VGW Holdings, Nova Scotia Provincial Lotteries and Casino Corporation, British Columbia Lottery Corporation, Gambling Research Australia, Responsible Gambling Trust, Clayton Utz, Greenslade, Coms Systems Ltd, Advance Gaming NZ Ltd, QBE Insurance, Department of Social Services, RSL & Services Clubs Assn, Communio, Centre of Addiction and Mental Health, North Sydney Leagues Club, Senet Compliance Academy, KPMG, New York Council on Problem Gambling, Clubs 4 Fun, Stibbe B.V., Minter Ellison, and Generation Next. The most up-to-date listing of funders and recent grants is available on her University website: https://www.sydney.edu.au/science/about/our-people/academic-staff/sally-gainsbury.html. During the past five years, Dr. Robert Heirene has received funding and in-kind support from Responsible Wagering Australia (funding awarded to Gainsbury) and its online wagering site members. During the past five years, Karen Amichia has had no financial interests or funding to disclose. During the past five years, Alessandra Grossman has had no financial interests or funding to disclose. During the past five years, Dr. Bo J. Bernhard has received research funding from the U.S.-Japan Business Council, Wynn Resorts, Atomic 47/ePlata Banking, Las Vegas Sands, the Nevada Department of Health and Human Services Governor's Advisory Panel on Problem Gambling, the State of Nevada Knowledge Fund, and MGM Resorts International. 
He has received honoraria and/or travel support from ClubsNSW, Aristocrat Technologies, the Evergreen Council on Problem Gambling, the National Council on Problem Gambling, IGT, Sundreams (Chile), AGS, the US-Japan Business Council, Global Market Advisors, Jewish Family and Community Services, and the British Columbia Lottery Corporation. During the past five years, Dr. Debi A. LaPlante has served as a paid grant reviewer for the National Center for Responsible Gaming (NCRG; now International Center for Responsible Gaming [ICRG]), received travel funds, speaker honoraria, and a scientific achievement award from the ICRG, has received speaker honoraria and travel support from the National Collegiate Athletic Association, received honoraria funds for preparation of a book chapter from Université Laval, received publication royalty fees from the American Psychological Association, and received course royalty fees from the Harvard Medical School Department of Continuing Education. Dr. LaPlante is a non-paid member of the New Hampshire Council for Responsible Gambling and the Conscious Gaming advisory board.
Ethics Approval and Informed Consent
This article is a scoping review of existing research; it does not constitute human subjects research and therefore did not require ethics approval.
Publisher's Note
Springer Nature remains neutral with regard to jurisdictional claims in published maps and institutional affiliations.
Rights and permissions
Open Access This article is licensed under a Creative Commons Attribution 4.0 International License, which permits use, sharing, adaptation, distribution and reproduction in any medium or format, as long as you give appropriate credit to the original author(s) and the source, provide a link to the Creative Commons licence, and indicate if changes were made. The images or other third party material in this article are included in the article's Creative Commons licence, unless indicated otherwise in a credit line to the material. If material is not included in the article's Creative Commons licence and your intended use is not permitted by statutory regulation or exceeds the permitted use, you will need to obtain permission directly from the copyright holder. To view a copy of this licence, visit http://creativecommons.org/licenses/by/4.0/.
Cite this article
Louderback, E.R., Gainsbury, S.M., Heirene, R.M. et al. Open Science Practices in Gambling Research Publications (2016–2019): A Scoping Review. J Gambl Stud 39, 987–1011 (2023). https://doi.org/10.1007/s10899-022-10120-y