1 Evidence-based management and meta-analyses

Since its inception, evidence-based management (EBMgt) has focused on using the best available evidence to inform decision-making in management practice (Rousseau 2006), as there is still a certain reluctance to base management decisions on scientific knowledge (Briner and Rousseau 2011). While meta-analysis has always been valuable as a cumulative approach, the reproducibility of findings in management research has recently come under discussion (Bergh et al. 2017). In this regard, scholars have noted that a first step towards more valid and generalizable evidence in organizational psychology and management is the systematic use of research syntheses (Briner and Rousseau 2011). The availability of systematic reviews and meta-analyses also facilitates the collection and assessment of relevant information on the side of practitioners and can thus play a crucial role in bridging the gap between research and organizational practice (Le et al. 2007).

While meta-analyses in management research and the organizational sciences have long received strong consideration, and despite some well-received guidelines (Geyskens et al. 2009), descriptions of methodological procedures are often inadequate and reporting is often non-transparent (Aytug et al. 2012), which may limit their validity and replicability. We therefore argue that the collaborative effort of cumulatively collecting, transparently publishing, assessing, and updating evidence in management science and organizational psychology can be improved to foster evidence-based decision-making. In the field of psychology, the platform PsychOpen CAMA (https://cama.psychopen.eu/) has been created as a tool that enables the research community to curate and update meta-analyses. CAMA is a tool for psychology and related fields and can therefore also be of interest for management researchers working on psychological concepts.

From this perspective, CAMA should be of interest for the field of management, with its focus on human behavior in social institutions and organizations (Nicholson 1998). An obvious example is Human Resource Management, which aims at diagnosing and developing competencies (Schaper 2004) and at effectively managing the available human resources within an organization (Combs et al. 2006). A popular topic in the field of personnel psychology is the personality traits of entrepreneurs, such as the Big Five (Zhao and Seibert 2006) and risk-taking behavior (Stewart and Roth 2001), and their relevance for entrepreneurial success. For the long-term performance and development of organizations, reactions to changing environments as well as learning and adaptation processes also have to be considered (Chang et al. 2014).

In the present article, we give an overview of the principles and functionalities of PsychOpen CAMA. By doing so, we hope to increase awareness of its potential use in management. In the following, we discuss requirements for open, reproducible, and current evidence. We start by discussing the concept of FAIR (Findable, Accessible, Interoperable, and Re-usable) data, the necessity of evidence-aggregating infrastructures, and the rapid accumulation of research findings in recent years. We then introduce community-augmented meta-analysis (CAMA) as a concept that accounts for open data policies and for systematic and efficient updating at the same time. In this regard, we present the platform PsychOpen CAMA. To illustrate its functionalities, we demonstrate its procedures using an empirical example from a recently published meta-analysis on gender differences in the intention to start a business (Steinmetz et al. 2021). Necessary steps to publish a meta-analysis on the platform are explained, and the available outputs on the user interface are demonstrated. Finally, benefits as well as technical and methodological limitations are discussed.

2 Open, reproducible and extendable meta-analyses

2.1 FAIR meta-analytic data and infrastructures

Although meta-analytic data are typically extracted from published primary studies and consist of study characteristics and summarized outcomes that are not subject to data protection concerns, meta-analyses in the organizational sciences often fail to meet common standards for transparent reporting (Schalken and Rietbergen 2017). In response, Lakens et al. (2016) argue for open meta-analytic data to make meta-analyses dynamic and reproducible. Open data alone, however, is not sufficient. Haddaway (2018) calls for open synthesis, that is, the application of open science principles to evidence synthesis. In their practical guide on how to conduct meta-analyses, Hansen et al. (2022) explicitly recommend open science reporting practices to allow the validation of results and the re-use of already coded data in subsequent meta-analyses, thereby contributing to cumulative science. To this end, they point to various templates, open science repositories, and even dynamic systems, including PsychOpen CAMA.

According to the principles of the Open Science Movement (Kraker et al. 2011), open syntheses should provide, in addition to the coded data, a sufficiently detailed description of the methodology to allow verification and replication, open programming code and tools, as well as open access to all relevant information. This would allow the research community to replicate, re-use, and update meta-analyses more efficiently and would prevent ambiguities and questionable research practices, as literature selection and data collection could rely on sufficient information about previous work. Research infrastructures are needed to facilitate the accumulation of evidence by fostering FAIR data sharing. FAIR data support the readability of data both for machines and for humans (Schultes and Wittenburg 2019).

According to the FAIR principles, evidence syntheses require findability and accessibility of the data to optimize decision-making. To serve the purpose of providing information in practical contexts, the comprehensibility of results is highly relevant. A graphical user interface (GUI) providing visualizations of meta-analytic results, including interpretation aids, can enable users without expert knowledge to get an overview of the evidence on a research question (Bosco et al. 2015). Plain Language Summaries (PLS) that summarize the existing evidence, in the tradition of Cochrane reviews (Langendam et al. 2013), can complement the GUI to make scientific knowledge accessible to decision-makers and the public.

The aim of interoperability is to enable machines and technical tools to understand and process new data automatically (Nilsson 2010). This can be achieved by common standards and consistency in data and metadata structures. Ideally, data are represented in a simple and reusable structure, and metadata describe the characteristics of a dataset (González Morales and Orrell 2018). Interoperability is especially relevant for evidence syntheses, as it facilitates and accelerates the accumulation of evidence and thus the timely integration of new research findings into synthesized evidence. Therefore, a basic template for meta-analytic data and a set of metadata describing the characteristics of a meta-analytic dataset should be used as the foundation for interoperability.
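As a rough illustration, the following R sketch shows what such a standardized effect-size table and its accompanying metadata could look like. All field names and values are assumptions chosen for illustration, not the actual PsychOpen CAMA template.

```r
## Illustrative sketch of a standardized effect-size table and its metadata
## (field names and values are assumptions, not the PsychOpen CAMA template).
dataset <- data.frame(
  report_id  = c(1, 1, 2),
  study_id   = c(1, 2, 3),
  outcome_id = c(1, 2, 3),
  ri         = c(-0.12, 0.05, -0.20),  # effect size (here: a correlation)
  ni         = c(250, 180, 420)        # sample size per effect size
)

metadata <- list(
  title            = "Gender and entrepreneurial intention",
  effect_size_type = "correlation",
  moderators       = c("tpb_construct", "publication_year", "mean_age")
)
```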

For researchers interested in replicating meta-analyses or re-using data from a published meta-analysis, data access and a thorough documentation of the underlying methodology are crucial (Aguinis et al. 2011). Available analysis scripts and technical tools facilitate the replication of published results. Additionally, existing data resources can be used for novel purposes, such as subgroup analyses, or for modifying methodological decisions, such as the estimation method or the selection of moderator variables in the model (Lakens et al. 2016). Data infrastructures adhering to the FAIR data principles thus have the potential to improve the efficiency of collaborative evidence collection and, at the same time, to increase the usability and accessibility of information for decision-makers and the public.
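A minimal sketch of this kind of re-use is given below: an openly shared, CAMA-style dataset is re-analyzed with the metafor package using a different estimation method and a subgroup restriction. The file and column names (ri, ni, sample_type) are assumptions for illustration, not a specific published dataset.

```r
## Hypothetical re-analysis of an openly shared meta-analytic dataset with metafor.
library(metafor)

dat <- read.csv("cama_dataset.csv")                            # assumed local copy of the open data
dat <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dat)  # Fisher-z effect sizes and variances

res_reml <- rma(yi, vi, data = dat, method = "REML")  # estimator assumed for the original analysis
res_pm   <- rma(yi, vi, data = dat, method = "PM")    # modified decision: Paule-Mandel estimator

## Re-use for a novel purpose: subgroup analysis restricted to student samples
res_students <- rma(yi, vi, data = dat, subset = sample_type == "student")
```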

2.2 Cumulative evidence collection and updating

Beyond the requirements of FAIR data and enabling infrastructures, it has to be considered that meta-analyses are only valid for a specific period of time (Créquit et al. 2016). Without additional electronic material, a meta-analysis represents the cumulative evidence on a research question up to a certain point in time and may quickly become outdated as soon as new findings from primary studies are published or new methodological or statistical procedures are developed (Shojania et al. 2007). An example demonstrating the importance of open meta-analytic data and frequent updates, both for efficient evidence accumulation and for the validity and timeliness of meta-analytic results, is the replication and extension of a meta-analysis on family firm innovation (Block et al. 2022). As the meta-analytic data of the prior meta-analysis were not available, only 104 of the original 108 studies could be gathered and the data had to be coded from scratch. At the same time, the replication served as the basis for a thorough extension: the study sample was updated and a new methodological approach was used. This had a significant impact on the results and changed the overall conclusion of the meta-analysis, underlining the relevance of the update.

For systematic reviews, an update is defined as a new edition of a published review; it can include new data, new methods, or new analyses. An update is recommended if the topic is still relevant and new methods or new studies have emerged that could potentially change the findings of the original review (Garner et al. 2016). For example, Cochrane reviews are required to be updated every two years (Shojania et al. 2007) and Campbell reviews within five years (Lakens et al. 2016). Shojania et al. (2007) assessed the survival time of 100 reviews and concluded that within two years, almost one-fourth of the reviews were already outdated. Créquit et al. (2016) examined the proportion of available evidence on lung cancer not covered by systematic reviews between 2009 and 2015 and found that, in all cases, at least 40% of treatments were missing. As the number of publications is continuously growing (Bastian et al. 2010), survival times of reviews can be expected to become even shorter.

The ongoing accumulation of evidence informs researchers about the latest findings in a specific research area, for example, whether results are already robust enough that further research investment is no longer justified, at least not without taking existing results and specific research gaps into account. A systematic review of cumulative meta-analyses (Clarke et al. 2014) reports many illustrative examples, underlining the relevance of cumulative research for enabling more informed decisions and, at the same time, a more efficient allocation of research funds and efforts.

Beyond the ongoing accumulation and synthesis of evidence, the next goal is to publish cumulative meta-analytic evidence effectively, in order to facilitate cumulative research synthesis and provide the best evidence for practical decision-making. The key challenge for the publication of meta-analyses is therefore to make the preexisting research reproducible and to allow meta-analyses to be updated by re-using the information collected up to the most recent meta-analysis, so as to keep pace with the continuous publication of research findings. To transform meta-analyses into transparent and dynamic resources, data and methods have to be reported in a standardized and open manner. In addition, programming code, interactive tools on a GUI, and PLS can improve the accessibility and comprehensibility of the accumulated evidence. In the following, a concept for a publication format that enables reproducible and dynamic meta-analyses is presented.

2.3 The concept of community-augmented meta-analyses

A concept approaching a publication format for comprehensive, dynamic, and up-to-date evidence synthesis already exists. Community-augmented meta-analysis (CAMA; Tsuji et al. 2014) is a combination of an open repository for meta-analytic data and an interface offering meta-analytic analysis tools. Depending on their focus, the conceptualization and labeling of systems based on similar ideas differ across scholars. For example, Créquit et al. (2016) call for living systematic reviews, that is, high-quality online summaries that are continuously updated. Similarly, Haddaway (2018) proposes open synthesis. Some authors also speak of dynamic (Bergmann et al. 2018) or cloud-based meta-analysis (Bosco et al. 2015). Braver et al. (2014) describe an approach called continuously cumulating meta-analysis (CCMA) to incorporate and evaluate new replication attempts within existing meta-analyses.

The basis of a CAMA system, as shown in Fig. 1, is the data repository, where meta-analytic data contributions from researchers in specific research areas are stored. It serves as a dynamic resource and can be used and augmented by the research community to keep the state of research updated and to accumulate knowledge continuously. Tools to replicate and modify analyses with these data are accessible via an open web-based platform, usually encompassing a graphical user interface. For example, users can examine moderator effects beyond the analyses presented in the original meta-analysis. The available evidence from the meta-analyses archived in a CAMA can also be used to improve study planning: estimates of the expected size of an effect can serve as input for power analyses, and the examination of possibly relevant moderators can help to identify research gaps and guide the design of new studies (Tsuji et al. 2014).

Fig. 1 Collection of and access to research findings in a CAMA system

The meta-analytic data in a CAMA system have to follow certain standards to ensure interoperability with the functionalities of the GUI. This means that analysis outputs requested by users are automatically available on the GUI for each dataset, because the underlying analysis functions understand the standardized data. The platform thus serves as a dynamic resource that enables the research community to keep the state of research updated and to accumulate knowledge continuously by providing a common language for the data. The role of the infrastructure provider is to set the standards for the data submitted to the repository and to store the data according to these standards. To sum up, a CAMA adheres to the requirements of FAIR data by making research results findable, complete datasets accessible, and by ensuring the interoperability of data and analysis scripts, thus making data reusable (Wu et al. 2019).
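The following sketch illustrates how such a shared standard enables generic tooling: a single analysis function can serve any dataset that follows the template, using the metadata only for labeling. The function and field names are hypothetical and do not reproduce the actual PsychOpen CAMA code.

```r
## Generic summary function for any dataset following the assumed template above.
library(metafor)

meta_summary <- function(dataset, metadata) {
  es  <- escalc(measure = "ZCOR", ri = ri, ni = ni, data = dataset)  # standardized columns ri, ni
  res <- rma(yi, vi, data = es)                                      # random-effects model
  list(
    effect_size_type = metadata$effect_size_type,
    k        = res$k,                  # number of effect sizes
    estimate = as.numeric(res$beta),   # pooled estimate (Fisher-z scale)
    ci_lb    = res$ci.lb,
    ci_ub    = res$ci.ub
  )
}

## meta_summary(dataset, metadata)  # works for every dataset following the template
```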

3 A platform for meta-analyses in organizational psychology

3.1 PsychOpen CAMA as a platform for meta-analyses in psychology

PsychOpen CAMA is a tool that serves the psychological research community by encompassing meta-analyses of different study types and from different areas of psychology. The service renders meta-analytic findings easily accessible, re-usable, and expandable for the research community (Burgard et al. 2022). It is provided by the Leibniz Institute for Psychology (ZPID), a Public Open Science Institute for psychology.

PsychOpen CAMA serves as an open repository for meta-analytic data and provides basic analysis tools (Tsuji et al. 2014), such as typical graphical devices and multilevel meta-regressions. The basic system has already been tested by the first data providers of the current datasets (e.g., Bucher et al. 2020) and is freely available to the research community. Currently available features include a data overview with summary statistics, graphs for data exploration, and basic meta-analytic outputs such as forest plots, funnel plots, and meta-regressions. A responsive interface allows users to account for dependencies in the data by using multilevel models (van den Noortgate et al. 2013) and to examine potentially relevant moderator variables. Furthermore, advanced meta-analytic tools such as p-curve analyses and power estimates are available to draw conclusions for further study planning or about the reliability of the meta-analytic evidence.
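As a rough sketch, the kind of multilevel model used to account for such dependencies can be written with metafor as follows; the platform runs comparable functions server-side, and the column names here are assumptions.

```r
## Three-level random-effects model: effect sizes nested within studies.
library(metafor)

res_ml <- rma.mv(
  yi, vi,                                 # effect sizes and sampling variances (assumed columns)
  random = ~ 1 | study_id / outcome_id,   # between-study and within-study heterogeneity
  data   = dat                            # dat: a data frame following the assumed template
)
summary(res_ml)
```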

The basic system of PsychOpen CAMA is depicted in Fig. 2. Meta-analytic data are standardized according to a template and stored in a self-managed R package including generic meta-analytic functions. These functions can understand and analyze the datasets using metadata. The functions for the meta-analytic calculations and visualizations in the self-managed R package are mainly based on the R package metafor (Viechtbauer 2010). The self-managed package is accessible and documented in a Git repository under a GPL-3.0 license: https://github.com/leibniz-psychology/PsychOpen-CAMA-R-package.

Fig. 2 Architecture of PsychOpen CAMA

In the web application, users can choose a dataset and request an analysis output. The requests are forwarded to an OpenCPU server (https://www.opencpu.org/), where the analyses are executed using the data and functions from the R package. Executing the analyses on the OpenCPU server ensures the scalability of the application, which is of particular relevance for a research infrastructure that covers a wide range of possible research areas and potentially reaches many users. The resulting outputs are embedded in the web application and thus displayed to the user.
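The general pattern of such a request can be sketched from the client side: OpenCPU exposes the functions of an R package under /ocpu/library/{package}/R/{function}. The server URL, package name, and function name below are placeholders, not the actual PsychOpen CAMA endpoints.

```r
## Hypothetical client-side request against an OpenCPU endpoint using httr.
library(httr)

resp <- POST(
  url    = "https://example-server/ocpu/library/camaPackage/R/meta_summary/json",  # placeholder endpoint
  body   = list(dataset = "CAMA_Business"),
  encode = "json"
)
content(resp)  # parsed result returned by the R function executed on the server
```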

PsychOpen CAMA allows researchers to use the data available on its website, either by replicating meta-analytic results in the application or by downloading the data for further analyses. For this purpose, the standardized datasets for PsychOpen CAMA are available under a CC-BY 4.0 license in PsychArchives (https://www.psycharchives.org/). To allow tracking the use of PsychOpen CAMA and the corresponding data, users have to adhere to the citation policy: researchers publishing or presenting work that uses data from PsychOpen CAMA must cite the original publications of the corresponding datasets as well as a publication on PsychOpen CAMA (Burgard et al. 2022).

There is a similar project in the field of management and applied psychology called metaBUS (www.metaBUS.org). The main differences between the two systems are briefly outlined in the following. MetaBUS is an open search engine based on a hierarchical taxonomy of the field and provides a database of correlations between clearly defined concepts within this taxonomy (Bosco et al. 2020). This approach differs substantially from PsychOpen CAMA, which is based on single meta-analyses. The effect size of interest in metaBUS is the correlation; correlations are collected via a semi-automated matrix extraction protocol, with trained coders supervising the process. For PsychOpen CAMA, it is planned to make use of crowdsourcing and of synergies with other ZPID services, whereas metaBUS relies exclusively on trained and paid coders, as crowdsourcing efforts have not yet paid off due to the difficulty of motivating and training potential collaborators (Bosco et al. 2020). In contrast to the architecture of PsychOpen CAMA with a server and a web application, metaBUS relies on R Shiny for the graphical user interface (Bosco et al. 2015).

3.2 Practical example with data from organizational psychology

PsychOpen CAMA currently includes 21 datasets (as of March 2022). In the following, we demonstrate the platform in detail using one of these datasets, which was originally analyzed and published in a meta-analysis on gender differences in the intention to start a business (Steinmetz et al. 2021) based on the theory of planned behavior (TPB; Ajzen 1991). The codebook and data are available in PsychArchives: https://doi.org/10.23668/psycharchives.5264. The outputs presented in the following can easily be reproduced in the user interface of PsychOpen CAMA: https://cama.psychopen.eu/inspection/CAMA_Business.

In the original meta-analysis, correlational data from 119 reports including 129 unique samples were collected. The effect sizes of interest were correlations between gender and TPB variables or among TPB variables. In addition, secondary data on cultural dimensions and economic data for the respective time and place of the studies were matched to the correlational data. Multilevel random-effects meta-analyses were conducted for all bivariate correlations of gender and the four TPB variables. Furthermore, a meta-analytic structural equation model (MASEM) was specified and computed. The relevance of the cultural and economic context for gender differences in the intention to start a business was assessed by regressing the respective correlations on cultural and economic variables.

The functionalities in PsychOpen CAMA do not yet support MASEM. Therefore, the data were restricted to correlations of TPB constructs with gender, resulting in 70 studies with 205 effect sizes. The data were standardized according to the general template for PsychOpen CAMA datasets. The resulting CAMA dataset is described in Table 1; for illustrative reasons, it is restricted to a sparse set of variables. The IDs for the report, the study, and the outcome are used to represent the hierarchical structure of the data. The 205 effect sizes in the dataset for PsychOpen CAMA stem from 68 reports including 70 unique studies. The reports were published between 1996 and 2019, and only 13 effect sizes are derived from reports that were not published in a peer-reviewed journal. The effect sizes of interest are correlations. For the meta-analytic calculations, the sample size and the sampling variance corresponding to each effect size are also given.

Table 1 Description and summary statistics of the CAMA dataset on Gender and TPB constructs
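Where primary studies report only the raw correlation r and the sample size n, the sampling variance in such a dataset can be approximated with the usual large-sample formula, sketched below; metafor's escalc() offers the same computation. Column names are assumptions.

```r
## Large-sample approximation of the sampling variance of a raw correlation r:
## var(r) is approximately (1 - r^2)^2 / (n - 1)
dat$vi <- (1 - dat$ri^2)^2 / (dat$ni - 1)

## Equivalent via metafor (adds yi and vi columns to the data frame):
## dat <- metafor::escalc(measure = "COR", ri = ri, ni = ni, data = dat)
```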

The dataset contains correlations between gender and four TPB constructs, namely attitude (47 correlations), intention (65 correlations), perceived behavioral control (61 correlations), and subjective norm (32 correlations). The distribution of the correlations for each of these constructs can be displayed in PsychOpen CAMA under “Data exploration”, where grouped violin plots (Fig. 3) can be requested. For each subgroup defined by the categorical variable on the x-axis, the distribution of the effect size is depicted. The horizontal lines within each violin plot divide the outcomes into five quintiles. For example, regarding the correlations between gender and intentions, it can be concluded that only the highest quintile of the correlations is positive and that about 20% of the correlations are below -0.2. The same type of output is obtained when selecting the country in which a study was conducted, the cultural cluster, or the type of sample.

Fig. 3 Screenshot of violin plots of the correlations with gender, grouped by TPB construct
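A comparable plot can also be produced locally from the downloaded data, for example with ggplot2; the column names in this sketch are assumptions.

```r
## Grouped violin plots of the correlations with gender, with lines at quintile boundaries.
library(ggplot2)

ggplot(dat, aes(x = tpb_construct, y = ri)) +
  geom_violin(draw_quantiles = c(0.2, 0.4, 0.6, 0.8)) +  # horizontal lines dividing five quintiles
  labs(x = "TPB construct", y = "Correlation with gender")
```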

Choosing continuous moderator variables results in scatterplots with the correlation on the y-axis and the moderator of interest on the x-axis. If two continuous moderators are selected, a scatterplot matrix is plotted. Figure 4 displays an example of a scatterplot matrix including the correlation between gender and a TPB construct, publication year, and the mean age of the sample. The association between the correlation and publication year is positive, suggesting that more recent studies report more gender equality with respect to the TPB constructs. The mean age of the sample and the correlations are negatively related, meaning that older samples provide less egalitarian outcomes.

Fig. 4 Screenshot of the scatterplot matrix of correlation, publication year, and mean age
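Locally, a similar scatterplot matrix can be obtained from the downloaded data with base R; the column names are again assumptions.

```r
## Scatterplot matrix of effect size, publication year, and mean sample age.
pairs(dat[, c("ri", "publication_year", "mean_age")],
      labels = c("Correlation", "Publication year", "Mean age"))
```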

In PsychOpen CAMA, multilevel random-effects meta-analyses of the correlations can be conducted. The interface also allows users to select up to two moderators for a meta-regression model. Figure 5 depicts the results of a meta-regression model in which the correlations were regressed on the TPB construct they refer to as well as on publication year. In the multilevel model, the variation attributed to each analysis level is estimated. Thus, the output reports the estimated variance, with the corresponding standard error, between the 70 studies and within the studies. The test for residual heterogeneity tests the null hypothesis that the underlying true effect size parameters are the same in all included studies, that is, that the variation between the effect sizes is only due to sampling variance. A statistically significant test statistic Q means that the null hypothesis is rejected and statistical heterogeneity is to be expected.

Fig. 5 Screenshot of the multilevel meta-regression model
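A sketch of the corresponding model specification with metafor is shown below; the variable names follow the description of the CAMA dataset and are assumptions, and centering the publication year mirrors the intercept interpretation described in the next paragraph.

```r
## Multilevel meta-regression of the correlations on TPB construct and (centered) publication year.
library(metafor)

res_mr <- rma.mv(
  yi, vi,
  mods   = ~ tpb_construct + I(publication_year - mean(publication_year)),
  random = ~ 1 | study_id / outcome_id,   # between- and within-study variance components
  data   = dat
)
summary(res_mr)  # reports variance components and the test for residual heterogeneity (QE)
```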

The model results provide the meta-analytic estimates. For the intercept, this is the estimated weighted mean of the correlation between attitudes and gender at the mean publication year. As gender was coded 0 for males and 1 for females, this result indicates a lower attitude towards starting a business among females. The estimates for the other three TPB constructs differ significantly from the estimate for attitudes. Furthermore, a more recent publication year increases the correlation, implying that females have become more inclined to start a business over time.

In addition to the model results, basic meta-analytic graphical displays are provided in PsychOpen CAMA. One of these, the contour-enhanced funnel plot, is depicted in Fig. 6. It takes the statistical significance of the outcomes into account when evaluating potential publication bias. The contour-enhanced funnel plot is centered at 0, and the differently colored regions indicate levels of statistical significance. Findings within the white region are not significant; an asymmetry in this region would therefore indicate potential publication bias, as small studies with non-significant results are expected to remain unpublished (Peters et al. 2008). The funnel plot in Fig. 6 does not provide evidence for publication bias.

Fig. 6 Screenshot of the contour-enhanced funnel plot
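The same type of plot can be generated with metafor; the display settings in this sketch are illustrative and not necessarily the platform's defaults.

```r
## Contour-enhanced funnel plot centered at zero, shading regions of statistical significance.
library(metafor)

res0 <- rma(yi, vi, data = dat)  # simple random-effects model underlying the funnel plot
funnel(res0, refline = 0, level = c(90, 95, 99),
       shade = c("white", "gray75", "gray55"), legend = TRUE)
```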

Whereas in the original study a meta-analytic structural equation model (MASEM; Cheung 2015) was used to examine the specific effects of attitudes, subjective norms, and perceived behavioral control on the intention to start a business, this analysis cannot be represented in PsychOpen CAMA. The data structure and analysis tools of PsychOpen CAMA do not enable the replication of a MASEM in the user interface. A web application for one-stage MASEM already exists: webMASEM (Jak et al. 2021). To implement MASEM functionalities in PsychOpen CAMA, specific data templates and the inclusion of specific analysis functions in the R package of PsychOpen CAMA would be needed.

Despite these methodological limitations, PsychOpen CAMA provides features and opportunities that go beyond a printed article. A study planning tool provides a power plot for a hypothetical further study, presuming the meta-analytic estimate to be the true underlying effect size. It allows users to estimate the necessary sample size for a desired level of statistical power. The implementation and publication of the meta-analytic data also facilitate further extensions of the dataset: it can be downloaded from PsychArchives and, using the corresponding codebook, new data can be added to the existing dataset and resubmitted to PsychArchives.
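The underlying idea can be sketched with a standard power calculation in R, treating the meta-analytic estimate as the true effect. The pwr package and the numeric value below are assumptions for illustration, not output of the platform.

```r
## Required sample size for a new study, assuming the meta-analytic correlation is the true effect.
library(pwr)

r_meta <- 0.10  # hypothetical magnitude of the meta-analytic correlation estimate
pwr.r.test(r = r_meta, sig.level = 0.05, power = 0.80)  # solves for the required n
```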

4 Benefits and limitations of PsychOpen CAMA

To conclude, PsychOpen CAMA provides a platform for psychological meta-analyses that makes data and analyses easily accessible, re-usable, and expandable. It adheres to the FAIR data principles, improving the potential for collaborative evidence collection as well as the usability and accessibility of information for decision-makers and the public. As such, it can also serve as a resource to foster the use of meta-analytic evidence in organizational psychology.

The practical example from organizational psychology demonstrates the usefulness of publishing meta-analytic data in PsychOpen CAMA to make it accessible and usable. However, PsychOpen CAMA is not yet suitable for advanced meta-analytic methods; therefore, the MASEM analyses from the original study could not be replicated with PsychOpen CAMA. Further methodological extensions, such as network meta-analyses (Nikolakopoulou et al. 2018) or the use of available individual-level data within meta-analyses (Pigott et al. 2012), would be desirable in the future.

Another limitation of PsychOpen CAMA concerns the automation of the data collection and extraction needed to extend the meta-analytic evidence. The continuous maintenance of the data repository is labor-intensive. Crowdsourcing could be a solution (McCarthy and Chartier 2017), yet it depends on the willingness of the research community to provide relevant data in the desired format. The goal is to support users in the submission of data and to automate repetitive processes as far as possible. However, at least for the monitoring of these processes, for plausibility checks, and for necessary corrections in case of erroneous entries, manual effort cannot be fully replaced. The long-term goal of PsychOpen CAMA is to keep pace with the publication of scientific results, at least in some domains and for hot topics in psychology.