In the subsections below, we present the results of the systematic literature review and the findings from the empirical research into current impact assessment practices of citizen science projects. These are followed by a discussion of the combined insights which we present as a set of guiding principles for a consolidated CSIAF.
Results of the systematic literature review
Each of the reviewed publications considers one or more of the five impact domains, namely Society, Economy, Environment, Science and Technology, and Governance (see Fig. 2). The only exceptions were two publications that focus on generic impact assessment approaches rather than on specific impact domains (Jacobs et al. 2010; Reed et al. 2018); these two publications are not included in the subsequent domain-specific analysis. A detailed overview of the relevance of the reviewed publications per domain is presented in a table in the supplementary material. The majority of the reviewed approaches focus on measuring impacts in one or two domains (32 and 19 publications, respectively); only 2 of the 77 reviewed publications referred to all 5 domains (Gharesifard et al. 2019a, b).
As is evident from Fig. 3, the reviewed literature addresses the five impact domains at distinctly different levels of intensity, with the largest number of publications (n = 65) in the society impact domain and the lowest in the economy domain (n = 12).
The review also captured whether a publication measured impacts at the thematic level or with concrete indicators. Insights at the thematic level refer to the identification of different themes (or areas of application) within each domain. For example, Ballard et al. (2017) discuss science-related outcomes in biodiversity research and Cook et al. (2017) focus on science-related outcomes in participatory health research, but neither provides indicators for measuring these outcomes. In contrast, Jordan et al. (2012) provide specific indicators for measuring science-related results of citizen science projects within the theme of ecological monitoring, for example, short- or longer-term changes in the understanding of natural systems, or the number of peer-reviewed publications. As illustrated in Fig. 4, except for the two generic publications (see Fig. 2), all publications in each domain provide insights at the thematic level; a far smaller number of publications in the same domain offer insights at the indicator level.
The largest share of the reviewed publications did not include evidence or supporting material (e.g., supplementary material) documenting the measured baseline situation, outcomes and/or impacts. The 12 notable exceptions (out of 77 papers in total) are Bremer et al. (2019), Gharesifard et al. (2019a), Grudens-Schuck and Sirajuddin (2019), Guldberg et al. (2019), Hassenforder et al. (2016), Haywood (2015), Hobbs and White (2016), Khodyakov et al. (2013), Merenlender et al. (2016), Trimble and Lazaro (2014), and Wehn et al. (2019b, 2020a, b, c).
In the society domain, there is a general distinction in the reviewed literature between (1) individual and collective level outcomes and (2) changes in knowledge, attitude and behaviour. One key theme relates to (individual and social) learning outcomes. Other salient themes relate to changes in relationships and partnerships among societal actors, community dynamics (including capacity, well-being and livelihoods) and changes in the understanding of and attitudes towards science, which provide cross-cutting links to the science domain. In the society domain, 31 publications provided specific indicators (Fig. 4). Examples include:
- Indicators of community participation (Butterfoss 2006, p. 331):
  - "diversity of participants/organisations
  - recruitment/retention of (new) members
  - role in the community or its activities
  - number and type of events attended
  - amount of time spent in and outside of community activities
  - benefits and challenges of participation
  - satisfaction with the work or process of participation
  - balance of power and leadership";
- Indicators of science inquiry skills of participants in citizen science initiatives (Phillips et al. 2018, p. 9):
  - "asking and answering questions
  - showing increased confidence in being able to collect data
  - collecting data
  - submitting data
  - developing and using models
  - planning and carrying out investigations
  - reasoning about, analysing, and interpreting data
  - constructing explanations
  - communicating information
  - using evidence in argumentation"
The themes and indicators in the science and technology domain focus on largely quantifiable outputs of the scientific process (e.g., data, publications and citations). Some approaches (Kieslinger et al. 2017, 2018; Chandler et al. 2017) capture changes to the scientific process via public participation and community engagement, changes in community-academia relations, and enhancements of the scientific knowledge base. Sixteen publications contributing to the science and technology domain provide indicators (Fig. 4). For example, Kieslinger et al. (2018, pp. 88–92) propose indicators in the form of closed questions, such as:
- "Does the project demonstrate an appropriate publication strategy?
- Are citizen scientists recognised in publications?
- Did the project generate new research questions, projects or proposals?
- Did the project contribute to any institutional or structural changes?
- Does the project ease access to traditional and local knowledge resources?
- Does the project contribute to a better understanding of science in society?"
Chandler et al. (2017, p. 172) suggest indicators such as the number of:
- "people and person hours dedicated to collecting scientific data,
- popular publications and outreach events"
The themes in the environmental domain focus on the status of environmental resources, e.g., resulting from conservation efforts, ecosystem functions, services and resilience, as well as impacts of environmental status on human health and livelihoods (cutting across to the society domain) and outcomes for agricultural productivity (cutting across to the economy domain). Indicators were identified in ten of the publications relating to the environment domain, such as
- "improved conservation action leading to better ecosystem function, ecosystem services and resilience" (Pocock et al. 2018, p. 278)
- "enhanced natural habitats and ecosystem services" (Chandler et al. 2017, p. 172).
The themes in the economy domain cover demand and supply aspects of citizen science, including the generation of entrepreneurial economic activity. The total number of contributions in this domain is small (n = 12), and of these, only six publications provide concrete indicators. Indicators on the demand side include:
- "number of jobs created" (Jordan et al. 2012, p. 308)
- "added value of citizen science data
- change in company growth
- international trade and investment" (Wehn et al. 2017, p. 36).
The contributions in the governance domain cover a wide range of themes, including the policy cycle, actual changes in policy, multi-level interactions among actors and their power dynamics, communication, relationships and trust. Most contributions highlight relevant themes; only ten publications provide specific indicators, for example:
- "contributions to management plans and policy" (Chandler et al. 2017, p. 172)
- "stakeholder interactions in decision-making processes (e.g., data provision, expressing preferences, deliberation and negotiation, etc.)" (Wehn et al. 2017, p. 34)
- "change in the level of authority and power of each stakeholder" (Wehn et al. 2017, p. 35)
Along with the definition of indicators, the reviewed literature describes guidelines on how to collect evidence of impact in each domain. The analysis of the methodological approaches used or referred to reveals that a mixed-methods approach (qualitative and quantitative) is by far the most commonly proposed approach for capturing the impacts of citizen science in the different domains, discussed in over 70% of the publications reviewed (Fig. 5). The highest percentage of quantitative impact assessment approaches was recorded in the science and technology and the society domains (Fig. 5); these were also the domains with the highest number of papers with specific indicators (Fig. 4). This could be because these two impact domains are frequently assessed in citizen science projects. Nevertheless, the overall percentage of purely quantitative methods is low (< 8%) in all five domains (Fig. 5), possibly because of the difficulties of quantifying the impacts of citizen science. The methods used include (and often combine) observations, (semi-)structured interviews, questionnaire-based surveys, and document analysis via checklists, gathering data from a variety of stakeholders (including non-participants) to capture the diversity of views about the baseline situation (even in retrospect) and about evolving outcomes and impacts at multiple points throughout the project.
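To make this method-coding analysis concrete, the following minimal Python sketch shows how such shares of methodological approaches per domain (cf. Fig. 5) can be tallied; the coding scheme and values are hypothetical placeholders, not the actual review data.

```python
from collections import Counter

# Hypothetical coding of the reviewed publications: for each impact domain, the
# methodological approach proposed or used by each relevant publication. The
# entries are illustrative placeholders, not the actual review data.
coded_methods = {
    "society": ["mixed", "mixed", "mixed", "qualitative", "quantitative"],
    "science and technology": ["mixed", "mixed", "quantitative"],
    "environment": ["mixed", "qualitative"],
}

for domain, methods in coded_methods.items():
    counts = Counter(methods)
    shares = {m: round(100 * n / len(methods)) for m, n in counts.items()}
    print(f"{domain}: {shares}")
# society: {'mixed': 60, 'qualitative': 20, 'quantitative': 20}
# ...
```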
The review of 77 impact assessment publications highlights that currently there are no standardised guidelines for assessing citizen science impact, and there is an imbalance in the domains in which citizen science impact is assessed (only 2 out of 77 publications reviewed covered all impact domains). Therefore, there is a need to build on the insights from existing impact assessments and develop a guiding framework that is able to address and navigate the complexity of measuring the impacts of citizen science across all five impact domains.
Empirical evidence of current impact assessment practices
The results of the empirical enquiry among citizen science project coordinators are summarised in Table 3. The Code System column presents the insights identified through qualitative analysis of the interviews. These insights are categorised into five groups, namely: purpose of impact assessment, method of impact assessment, impact indicators, impact domains, and challenges of impact assessment. The Coded Segments column shows the number of times each coded insight appeared across all 11 interviews, while the Number of Interviews column corresponds to the number of project coordinators who referred to that insight in their responses. The reasons (or purposes) for citizen science impact assessment varied from justifying the project during the proposal stage, to generating insights later in the project, whether for personal/internal purposes (e.g., learning), promoting the citizen science initiative, accounting or reporting (e.g., to funders or financial accountants), or improving project activities and the attainment of envisaged results and impacts via adaptive management (project evaluation and improvement). Accounting/reporting was the dominant reason (coded ten times across eight of the interviews) for measuring impact in the different citizen science projects (Table 3).
Table 3 Coded results of interviews with citizen science project coordinators (highlighted rows indicate the aspects with the highest frequency of occurrence)

The interview results indicate that a range of methods is used for collecting evidence of impacts, differing in terms of the timing of their application in different project stages (e.g., ex-ante impact assessment before the start of the project or before the hands-on citizen science activities on the ground), the way impacts are structured and captured (e.g., narrative impact stories vs structured surveys or interviews with a range of stakeholders), and the focus of analysis (e.g., actors' perspectives, or the usage of citizen science tools). Surveys, interviews and feedback forms were the most commonly mentioned methods of impact assessment, cited 12 times across nine of the interviews (Table 3).
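As an illustration of how the Coded Segments and Number of Interviews columns in Table 3 can be derived from the output of qualitative coding, the following minimal Python sketch tallies both counts; the code labels and data are hypothetical stand-ins, not the actual interview data or code system.

```python
from collections import defaultdict

# Hypothetical coded segments from the interview transcripts, standing in for
# the export of qualitative coding software: one (interview_id, code) pair per
# coded segment. The labels below are illustrative only.
coded_segments = [
    (1, "purpose: accounting/reporting"),
    (2, "purpose: accounting/reporting"),
    (2, "method: surveys, interviews and feedback forms"),
    (3, "method: surveys, interviews and feedback forms"),
    (3, "purpose: accounting/reporting"),
    (3, "purpose: accounting/reporting"),  # a code can recur within one interview
]

segment_counts = defaultdict(int)       # -> 'Coded Segments' column
interviews_per_code = defaultdict(set)  # -> basis for 'Number of Interviews'

for interview_id, code in coded_segments:
    segment_counts[code] += 1
    interviews_per_code[code].add(interview_id)

for code, count in segment_counts.items():
    print(f"{code}: {count} segments in {len(interviews_per_code[code])} interviews")
```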
The impact indicators mentioned by the interviewed citizen science practitioners reflect some blurring of terminology, e.g., referring to the number of data points collected (arguably an output, not an impact). Nevertheless, the responses indicate the broad range of impact indicators in use, which include not only cognitive changes in awareness of the topic that is the focus of a citizen science initiative, but also changes in attitudes, actions and policy.
Notably, the five impact domains were confirmed as relevant, albeit to differing degrees, by the respective respondents; no additional domains were suggested. As in the 77 publications reviewed, the science and technology and society domains had the highest coding frequencies and were mentioned in more than 45% of the interviews with practitioners. Finally, respondents identified a number of challenges in undertaking impact assessments of their citizen science projects, relating to the well-known dilemma of misalignment between the timing of funded project activities and the (longer-term) manifestation of envisaged (and observable) impacts; difficulties in collecting data about impacts; project priorities limiting the attention given to impact assessment activities; a lack of competencies among project partners to undertake sound impact assessment; and the unavailability of resources.
Discussion
The analysis of the results presented in Sect. 3.1 (especially the strengths, weaknesses and lessons learned from the application of citizen science impact assessment approaches), together with the empirical evidence from citizen science projects presented in Sect. 3.2, generates a number of salient insights which we combine here into six guiding principles for a consolidated Citizen Science Impact Assessment Framework (CSIAF). Specifically, these guiding principles refer to the purposes of assessing impact in the context of citizen science, the non-linear conceptualisation of impact journeys, the data collection methods and information sources for impact assessment, the distinction between relative and absolute impact, the comparison of impact assessment results across citizen science projects, and the incremental enhancement of the organising framework over time. Below, we list the six principles to inform a consolidated CSIAF which, we hope, can serve citizen science practitioners (e.g., project coordinators, community managers) and impact researchers alike.
Putting these principles into practice to compose a consolidated CSIAF will involve the careful comparison, alignment and (if appropriate) combination of relevant indicators per domain and theme, along with the selection of data collection methods to capture evidence of (emerging) impacts. The framework will be implemented as an online resource and tool via a dedicated effort of the MICS projectFootnote 2 and rolled out to citizen science initiatives in Europe and globally during 2021.
Principle 1: Acknowledging a variety of purposes of citizen science impact assessment
The reasons for impact assessment of citizen science projects range from impact reporting, to learning for improved (future) implementation, to ex-ante impact assessment to substantiate proposals and grant applications and to capture baselines. The CSIAF therefore needs to accommodate a range of reasons, purposes and timings for undertaking impact assessment within citizen science projects. This requires projects to consider both process-related and results-related indicators (Haywood and Besley 2013; Ravn et al. 2016; Wehn et al. 2020c)Footnote 3. Benchmarks and feedback on the extent to which, and how, envisaged results are and can be achieved are also recommended and can feed into the adaptive management of projects. At the moment, although some of the 77 reviewed publications highlight the role of evaluation in adaptive project management (e.g., Kieslinger et al. 2017; Wehn et al. 2017, 2020a, b, c), most do not provide explicit examples of projects that have changed or adjusted their strategies based on assessing impacts during the lifetime of the project.Footnote 4
Principle 2: Non-linear conceptualisation of impact journeys to overcome impact silos
The intervention logic (also known as the results chain or logical framework approach) underlies many impact assessment efforts of public interventions and, in particular, assessments of research activities, such as the MoRRI framework for monitoring Responsible Research and Innovation (RRI) (Ravn et al. 2016), as well as evaluations of citizen science efforts (e.g., DITOs Consortium 2016). The definitional system of the logical framework, in terms of outputs, outcomes and impacts, provides useful distinctions between the different results emerging before eventual impact is achieved. Nevertheless, its inherently linear conceptualisation and generic definitions are limiting, offering too little guidance on the changes related to citizen science. This can result, among others, in 'impact silos', i.e., a lack of awareness of other relevant types of impacts.
Moreover, evidence from citizen science impact assessments has shown that impact journeys 'zigzag' across multiple domains, i.e., there are dependencies in terms of the sequence of distinct outcomes, such as social and institutional changes preceding the realisation of environmental improvements (Wehn et al. 2020b; Wood et al. 2020; Pólvora and Nascimento 2017).
A comprehensive CSIAF therefore needs to cover the relevant impact domains while providing sufficient flexibility in the selection of domains and respective outcomes. Our systematic review of existing citizen science impact assessment efforts confirmed the domains of society, economy, environment, governance, and science and technology.
Citizen science practitioners need to be able to plan and trace impact pathways in and across (a subset of) these domains. To do so, not only are sound distinctions between outputs, outcomes and impacts in each domain essential (Friedman 2008; Bonney et al. 2009b; Koontz and Thomas 2012), but the causal relations between intermediary outcomes and impacts within a given domain, and between outcomes in different domains, must also be identifiable and traceable. Moreover, citizen science is already contributing to the monitoring of five SDG indicators and could contribute to 76 indicators, together amounting to 33% of all SDG indicators (Fraisl et al. 2020), providing not only data but also a means for stimulating citizen action and informing and/or changing policy for SDG implementation. It therefore needs to be possible to select, and adjust over time, which SDGs a citizen science project intends to monitor and actually contributes to, as a project may pivot towards a different or additional goal.
Principle 3: Adopting comprehensive impact assessment data collection methods and information sources
Reliable impact assessment of citizen science projects involves a range of data collection methods and sources and ideally captures evidence not only from participants (i.e., citizen scientists) but also from other relevant stakeholders and beneficiaries (Wehn et al. 2017; Guldberg et al. 2019) who can provide evidence of a range of (evolving) impacts. Some recent citizen science and citizen observatory projects have attempted more comprehensive reviews (e.g., Woods et al. 2019; Wehn et al. 2017, 2019b, 2020b). For example, Wehn et al. (2017) proposed and repeatedly applied (Wehn et al. 2019b, 2020a, b, c) a results-based approach, complemented with relevant theoretical conceptsFootnote 5 and carefully designed data collection instruments and selected methods,Footnote 6 to capture the particular social, institutional and economic changes linked to the implementation of six citizen observatories that ultimately aim for improvements in the environment. This combination of project monitoring, validation and impact assessment provided a comprehensive feedback tool to inform improvements to the final citizen observatories and to innovate specific aspects of the initiatives and their technological tools (apps, online platforms). The way in which project partners, stakeholders and beneficiaries provide evidence needs to allow and guide them within a wide range of suitable impact assessment data collection methods, without being prescriptive (Phillips et al. 2012, 2014, 2018); the aim is to "…standardise good practice in evaluation rather than use standard evaluation methods and indicators" (Reed et al. 2018, p. 143), given the wide range of citizen science practices and impacts. Such guidance towards good practice needs to encourage the provision of evidence of impacts whenever possible, including, for example, in the supplementary material of papers reporting on citizen science impacts.
Moreover, data collection for impact assessment of citizen science activities under the CSIAF should allow its users (i.e., citizen science practitioners and impact researchers) to 'practice what we preach' by involving citizen scientists in the collection of evidence about impacts as they emerge over time, gathering measurements not only of 'scientific' indicators but also of community-defined successes (Hermans et al. 2011; Haywood 2015; Graef et al. 2018; Constant and Roberts 2017; Trickett and Beehler 2017; Arora et al. 2015; Jacobs et al. 2010) such as Community Level Indicators (Coulson et al. 2018; Woods et al. 2019, 2020).
Citizen science projects have different types and levels of resources (financial resources, time, networks and qualified staff) at their disposal for impact assessment, which can affect the extent of their impact assessment efforts and hence the type and range of evidence that they can capture. The CSIAF should therefore provide sufficient and appropriate guidance, as well as links to relevant resources, so that it can be applied in both a 'light-touch' and a more comprehensive manner.
Principle 4: Moving beyond absolute impact
The limitations of relying on absolute and fixed (typically quantified) measures of impact are becoming increasingly evident, including in the field of citizen science. For example, Cox et al. (2015) acknowledge the bias caused by quantitatively comparing the impacts of long-running projects against those that have been running for only a short period of time. Sound impact assessment needs to measure impact relative to the context and to the goals and objectives of citizen science projects (Reed et al. 2018; Gharesifard et al. 2019b). The CSIAF needs to provide the means to enter and measure progress against project-specific objectives and to take context into account, including the geographical context, the socio-economic setting, and available resources (time, finances, staff, etc.), and by enabling comparisons with a different citizen science project, a non-citizen science project, or the absence of a project.
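One simple way to operationalise relative rather than absolute impact is to normalise a measured change against a project-specific baseline and objective. The sketch below illustrates this idea; it is an assumption-based example, not a formula proposed in the reviewed literature.

```python
def relative_progress(baseline: float, target: float, measured: float) -> float:
    """Progress towards a project-specific objective, expressed as the fraction
    of the distance covered between the baseline and the target (1.0 = target
    reached). Comparing this fraction rather than the raw measured value avoids
    penalising short-running or small projects against long-running ones."""
    if target == baseline:
        raise ValueError("target must differ from baseline")
    return (measured - baseline) / (target - baseline)

# Two projects with very different absolute outcomes can show the same
# relative progress against their own objectives:
print(relative_progress(baseline=0, target=200, measured=150))   # 0.75
print(relative_progress(baseline=20, target=60, measured=50))    # 0.75
```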
Principle 5: Fostering comparison of impact assessment results across citizen science projects
As we argued from the outset, the diversity of citizen science projects in terms of the thematic issues addressed, the stakeholders involved, and the extent and type of impact assessment undertaken makes it challenging to compare results across projects (Cargo and Mercer 2008; Hassenforder et al. 2016; DITOs Consortium 2016; Kieslinger et al. 2017; Wiggins et al. 2018), or to relate them to other frameworks such as the Sustainable Development Goals (Fraisl et al. 2020). Similar to current efforts to build interoperability across data systems and platforms of citizen science projects (Bowser 2017; Masó and Fritz 2019; Masó and Wehn 2020), cross-comparison of impacts and impact data would be a beneficial development for citizen science. A comprehensive CSIAF can enable the comparability of impact assessment results that are based on different methods and information sources by using consistent overarching categories and definitions (Phillips et al. 2012; Reed et al. 2018; Gresle et al. 2019). This could be done, for example, by capturing impact assessment results from different projects via a single online tool (e.g., a questionnaire) (Gresle et al. 2019) based on the CSIAF and, when visualising individual and compared results, by distinguishing validity levels (e.g., via a colour scheme) according to the range of underlying data sources. This can serve to generate both project-specific and aggregated results.
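As a sketch of the kind of data model such an online tool might use (the names and the three-level validity scheme below are our assumptions, not the MICS implementation), impact results could be stored per project with a validity level derived from the number of distinct underlying data sources and mapped to a colour for visualisation:

```python
from dataclasses import dataclass, field

# Hypothetical three-level validity scheme: more distinct data sources behind a
# result -> higher validity, rendered as a traffic-light colour in the tool.
VALIDITY_COLOURS = {"low": "red", "medium": "amber", "high": "green"}

@dataclass
class ImpactResult:
    project: str
    domain: str        # e.g., "society", "governance"
    indicator: str
    value: float
    data_sources: list[str] = field(default_factory=list)

    def validity(self) -> str:
        n = len(set(self.data_sources))
        return "high" if n >= 3 else ("medium" if n == 2 else "low")

result = ImpactResult(
    project="Project A",
    domain="society",
    indicator="recruitment/retention of (new) members",
    value=0.4,
    data_sources=["survey", "interviews", "platform logs"],
)
print(result.validity(), VALIDITY_COLOURS[result.validity()])  # high green
```

Storing the underlying data sources alongside each result, rather than only the value itself, is what allows aggregated cross-project views to signal how robust each compared figure is.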
Principle 6: Cumulative enhancement of the framework over time
The collective advancement of impact assessment theory and practice in the field of citizen science relies on reflection and cumulative additions, based on insights across projects and methods. To remain relevant over time and to serve the citizen science community, the impact assessment framework needs to build on collective and cumulatively evolving intelligence, drawing on additional inputs and definitions from researchers and practitioners as well as more structured reflection and quality control (peer review) to check whether appropriate items, definitions and methods are being used.
A tiered system of indicators (similar to the SDG Tier 1, 2 and 3 system of indicatorsFootnote 7) may be used to indicate the maturity level or peer-review status of new indicators that are under review; a similar system may need to be set up and maintained for the curation of the CSIAF. Communities of Practice (CoPs), such as the WeObserve CoPs, and related fora, such as the Working Groups of the European Citizen Science AssociationFootnote 8, can offer the continuity and space for practitioners to reflect on, discuss and refine CSIAFs. For example, the WeObserve projectFootnote 9 launched four Communities of Practice as a key mechanism for consolidating knowledge within as well as beyond the WeObserve consortium. These CoPs serve as a vehicle for sharing information and creating new knowledge on selected key thematic topics related to citizen science, and include one CoP dedicated to capturing the impact and value of citizen science. These fora have contributed to strengthening the knowledge base about citizen science in general and citizen science impact assessment in particular.
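A minimal sketch of what such a tiered curation scheme could look like, loosely modelled on the SDG tier classification; the tier names and the classification rule are illustrative assumptions, not an agreed CSIAF design.

```python
from enum import Enum

class Tier(Enum):
    # Loosely analogous to the SDG indicator tier classification:
    ESTABLISHED = 1   # agreed methodology, regularly applied across projects
    EMERGING = 2      # agreed methodology, not yet widely applied
    PROPOSED = 3      # community-submitted, methodology still under peer review

def classify_indicator(has_agreed_methodology: bool, widely_applied: bool) -> Tier:
    """Assign a maturity tier to a candidate CSIAF indicator."""
    if not has_agreed_methodology:
        return Tier.PROPOSED
    return Tier.ESTABLISHED if widely_applied else Tier.EMERGING

# A newly submitted indicator enters at the lowest maturity tier and is
# promoted as peer review and repeated application accumulate.
print(classify_indicator(has_agreed_methodology=False, widely_applied=False))
# -> Tier.PROPOSED
```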