In addition to the previously outlined evaluation criteria that apply to participation processes in general, we suggest novel criteria that incorporate the tools and media used in digital participation processes: First and foremost, the whole process and every tool used therein must respect the privacy of the participants and comply with applicable laws and regulations concerning data security and privacy. This may be a challenging task because it means balancing two conflicting requirements. On the one hand, transparency is crucial within the process: it is necessary to attribute contributions to specific participants and to allow feedback and collaboration, in particular when online and offline tools are mixed. On the other hand, anonymity may be desired (and legally required) for external communication and long-term documentation.
Beyond that, the tools used in the process must of course function flawlessly. Additional evaluation criteria may be derived from these technical requirements, concerning, for example, the technical robustness, operational safety, speed and cost-effectiveness of the hardware and software.
Massive digital participation processes comprise a sequence of distinct phases with different intermediate goals. In the following subsections, we outline possible operationalisations of the criteria for these phases, using the U_CODE process (see Fig. 1) as a paradigmatic example (see Table 1 for an overview of the possible operationalisations).
Table 1 Proposed operationalisations of criteria for the evaluation of massive digital public participation processes, using the phases of the U_CODE process as an example
All Phases
Regarding most of the evaluation criteria described above, possible operationalisations would be very similar in all phases (although they should of course be measured separately for each phase). In particular, this pertains to representativeness, inclusiveness, external transparency, the quality of the digital tools and the effects on the participants. We address the following criteria first:
During each process step, the participants should be representative of the people affected by the design. The data necessary for statistically analysing the demographic characteristics of the participants may be readily available within the process (e.g. through a registration process or via associated social network accounts), or it may have to be actively collected. However, the evaluation should cover not only the make-up of the participants themselves but also the weight of the output from each demographic subgroup in each step. To achieve this, the contributions of each individual (utterances, design proposals, inquiries on other participants’ ideas) must be counted and related to the demographic characteristics.
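A minimal sketch of this counting step, assuming a hypothetical contribution log keyed by participant IDs and a known demographic group per participant, might look as follows:

```python
from collections import Counter

# Hypothetical data: the demographic group of each registered participant
# and a log of contributions (utterances, design proposals, inquiries),
# each identified by the contributing participant.
group_of = {"p1": "18-29", "p2": "18-29", "p3": "30-49", "p4": "65+"}
contributions = ["p1", "p1", "p2", "p3", "p1", "p4", "p3", "p1"]

participant_share = Counter(group_of.values())  # subgroup sizes
output_share = Counter(group_of[pid] for pid in contributions)

for group, n_members in participant_share.items():
    p = n_members / len(group_of)                         # share among participants
    o = output_share.get(group, 0) / len(contributions)   # share of the output
    # A weight above 1 means the subgroup is over-represented in the output.
    print(f"{group}: participants {p:.0%}, output {o:.0%}, weight {o / p:.2f}")
```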
For the evaluation of inclusiveness, the provision of information on the procedure and the editing and presentation of the subject matter should be assessed. It must also be determined whether the process designers accommodate the different communication habits and abilities of special participant groups. For an evaluation, this may mean checking whether plain, clear language is used and technical jargon is avoided (as outlined in a number of style guides; e.g. Directorate-General for Translation 2016). Also, the digital tools employed in the process should be easily usable and take into consideration different physical or cognitive disabilities on the part of the participants. In this regard, evaluators may refer to pertinent technical guidelines for measurable operationalisations, e.g. the international standards on accessibility (ISO/TC 159 Ergonomics 2008; esp. the list of requirements in Annex B and the checklist in Annex C.1).
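As one crude, automatable proxy for the plain-language criterion (no substitute for the cited style guides or accessibility standards), evaluators could flag process texts with overly long sentences or a high share of long words; the thresholds below are illustrative assumptions only:

```python
import re

def plain_language_flags(text, max_sentence_words=20, long_word_chars=13):
    """Rough plain-language proxy: flags long sentences and long words."""
    sentences = [s for s in re.split(r"[.!?]+", text) if s.strip()]
    words = re.findall(r"\w+", text)
    avg_sentence_length = len(words) / max(len(sentences), 1)
    long_word_share = sum(len(w) >= long_word_chars for w in words) / max(len(words), 1)
    return {
        "avg_sentence_length": avg_sentence_length,
        "long_word_share": long_word_share,
        # Purely illustrative thresholds; flagged texts need manual review.
        "review_recommended": (avg_sentence_length > max_sentence_words
                               or long_word_share > 0.10),
    }

print(plain_language_flags("The operationalisation necessitates disambiguation."))
```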
To evaluate external transparency, evaluators may investigate to what extent, and in what quality, the information collected and presented during the process is compiled into documentation that allows external parties to easily comprehend each step and each result at any time during or after the process. The process facilitators should not only compile this output after the process but also continuously feed it out of the process, via automated channels or in an edited format. While the accessibility and comprehensibility of the output may be evaluated through the judgements of communication experts, verifying whether it has indeed been accurately understood should involve the public itself, e.g. by interviewing diverse members of the public to ascertain their understanding of the process output. The evaluation may also focus on those persons within the process who are responsible for public relations. Evaluators may check not only whether they constantly inform the media, relevant stakeholder organisations and political decision-makers but also whether they are available to competently answer inquiries from the public.
The quality of the digital tools is also important for all process phases, concerning both basic technological functioning in general and data protection requirements in particular. For an evaluation of the former, one may, for example, analyse the number and frequency of automatically logged software crashes (relative to the number of active users) or user judgements regarding the rendering speed of virtual reality applications. Further parameters of interest in this regard are the amount and quality of the processed data, including aspects such as the completeness of data sets, the appropriateness of the data formats or the availability of metadata. Regarding data protection and privacy, an evaluator may consider seeking expert judgements from the responsible data protection officers.
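For the crash-rate metric mentioned above, a minimal sketch, assuming crashes and active users are logged per day with hypothetical field names, could normalise crashes by usage like this:

```python
# Hypothetical daily logs: number of automatically recorded crashes
# and number of active users of a given tool.
daily_log = [
    {"day": "2024-05-01", "crashes": 3, "active_users": 420},
    {"day": "2024-05-02", "crashes": 1, "active_users": 510},
    {"day": "2024-05-03", "crashes": 7, "active_users": 480},
]

for entry in daily_log:
    # Crashes per 1,000 active users, so that days with different
    # participation levels remain comparable.
    rate = 1000 * entry["crashes"] / entry["active_users"]
    print(f'{entry["day"]}: {rate:.1f} crashes per 1,000 active users')
```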
The effects on the participants could be measured by administering questionnaires at the beginning and end of the process, with questions measuring the participants’ factual knowledge of the subject matter and/or self-assessments of their perceived level of understanding. By comparing the two measurements, the evaluators can determine the extent to which the process has improved understanding of the subject matter. The participants’ motivation to participate in future processes could be measured with questions on their satisfaction with the process, their subjective assessment of the deliberation quality and their likelihood of participating in future participation processes. In the case of digitally mediated public participation, process satisfaction must also cover the technological aspects of the process, i.e. satisfaction with the tools used and the willingness to use them again in the future.
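The pre/post comparison could be analysed as in the following sketch, which uses hypothetical knowledge-test scores and a paired t-test from SciPy:

```python
from scipy import stats

# Hypothetical knowledge-test scores of the same participants
# at the beginning (pre) and end (post) of the process.
pre = [4, 6, 5, 3, 7, 5, 4, 6]
post = [6, 7, 6, 5, 8, 6, 5, 7]

# Paired t-test: did understanding of the subject matter improve?
result = stats.ttest_rel(post, pre)
mean_gain = sum(b - a for a, b in zip(pre, post)) / len(pre)
print(f"mean gain: {mean_gain:.2f} points, "
      f"t = {result.statistic:.2f}, p = {result.pvalue:.4f}")
```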
Phase 1: Process Initiation
Starting from a general idea, an ‘initial brief’ will be created which roughly outlines the envisioned project.
The initial brief may include information on the project scope, relevant stakeholder groups and general objectives. It will be co-created by the project initiator and the super moderator; hence, a critical factor for internal transparency will be the quality of the communication between these two actors. It may be possible to assess the communication quality by analysing documents created in this phase (e.g. e-mails) or by interviewing the actors.
In the previous section, we outlined possible operationalisations for assessing the effects on the participants. Because the process initiation phase does not directly involve any participants, only a selection of these operationalisations applies to it. In this phase, effects on the participants will primarily concern the degree to which the public accurately understands the project’s objectives, which may be ascertained using interviews or questionnaires.
Phases 2 and 3: Co-briefing and Co-design
In the co-briefing phase, the initial brief will be enriched by requirements regarding the project contributed by process participants, using digital brainstorming and idea-harvesting tools. In the co-design phase, digital co-creation tools will be used to create a professional design brief, consisting of (low-level) design proposals. Both phases are similar regarding internal transparency and facilitation of deliberation:
To ensure internal transparency, it is crucial that instructions for the task(s) to be carried out in the respective phases are readily available and correctly understood by the participants. It must also be clear what the current task aims at and what role it plays in the overall process. Since co-briefing and co-design may be novel tasks for many participants, special emphasis may have to be put on explaining their aims, the differences between the two tasks and their roles within the overall process. An evaluation may employ questionnaires at the end of the respective phase, asking the participants for subjective assessments regarding these issues. Also, to identify potential for improvement concerning the availability and comprehensibility of the information provided in these phases, digital support systems (if present) may be analysed: by counting the number of questions, and particularly the frequency of questions relating to very similar aspects, evaluators may identify potential weaknesses.
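The question-counting idea could be prototyped as below; the keyword-based grouping is a deliberately crude assumption, and logs from a real support system would call for more robust text clustering:

```python
from collections import Counter
import re

# Hypothetical questions submitted to a digital support system.
questions = [
    "How do I upload my design proposal?",
    "Where can I upload a proposal?",
    "How does the voting work?",
    "Upload of proposal fails, what now?",
]

STOPWORDS = {"how", "do", "i", "my", "a", "the", "can", "where", "of",
             "does", "what", "now"}

def keywords(question):
    words = re.findall(r"[a-z]+", question.lower())
    return frozenset(w for w in words if w not in STOPWORDS)

# Questions sharing a keyword are treated as relating to similar aspects.
topic_counts = Counter()
for q in questions:
    for kw in keywords(q):
        topic_counts[kw] += 1

# Frequently recurring topics hint at weaknesses in the provided instructions.
print(topic_counts.most_common(3))
```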
To facilitate deliberation, the tools employed in the process must enable the participants to effortlessly express their own ideas and to easily understand other participants’ ideas in order to build upon them. This is crucial for the collaborative quality of co-briefing and co-designing. The perceived ease of expressing, understanding and building upon ideas, as well as the participants’ readiness to adopt and engage with other participants’ ideas, may be assessed using questionnaires administered at the end of the respective phase. Also, evaluators may analyse documents or inquire with the process designers regarding possible strategies used to facilitate deliberation. To evaluate the actual deliberation quality, evaluators may develop methods specific to each tool, e.g. focusing on the number of ideas created or built upon, the number of contributions to discussions or the number of incivilities reported to moderators during the process. Furthermore, if monitoring tools like sentiment analysis and opinion mining (Liu 2012) are used within the process, they may also be employed to automatically and continuously gauge the deliberation quality, e.g. by assessing the civility of the interaction between the participants.
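As an illustration of such automated monitoring, the sketch below scores hypothetical discussion posts with the VADER sentiment analyser shipped with NLTK; any other sentiment or toxicity model could be substituted, and the flagging threshold is an arbitrary assumption:

```python
import nltk
from nltk.sentiment import SentimentIntensityAnalyzer

nltk.download("vader_lexicon", quiet=True)  # one-time lexicon download
sia = SentimentIntensityAnalyzer()

# Hypothetical discussion posts from a co-design tool.
posts = [
    "I really like this idea, we could add a playground next to it.",
    "This proposal is complete nonsense, did you even think?",
    "Could you explain how the green roof would be maintained?",
]

for post in posts:
    score = sia.polarity_scores(post)["compound"]  # -1 (negative) .. +1 (positive)
    flag = "possible incivility" if score < -0.5 else "ok"
    print(f"{score:+.2f} [{flag}] {post}")
```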
Phase 4: Professional Design
The previous two phases resulted in a co-designed project brief and low-level design proposals, which together make up a brief for the professional design phase. In this phase, professionals create design proposals, possibly in the format of a conventional design competition. While following their established work procedures, the design firms may communicate their ongoing work to the public and receive feedback, for example via sentiment analysis.
To ensure internal transparency of this phase within the overall process, the design tasks carried out by the professionals should be thoroughly communicated. For example, it may be explained to the public how the professional design process works, i.e. which inputs are used by the design professionals (and which are not) and which intermediary steps are taken within the process. The public in general, and the process participants in particular, may also need to be instructed on how to interpret the output of the process, i.e. the professional design proposals. Since most laypeople are not accustomed to reading architectural plans or to understanding handcrafted or virtual architectural models, dedicated strategies for architecture communication may be necessary to facilitate the next steps (integration and voting). For the evaluation, questionnaires may be used in which the participants are asked to judge how well they understand the professional design process and the resulting design proposals.
In order to facilitate deliberation, the process designers may provide feedback tools which allow for rational discussions of the intermediary products of this phase. Such feedback platforms may be moderated to ensure respectful and fruitful discussion, e.g. by citizen moderators (for details regarding this idea, see Section ‘Facilitation of deliberation’). For the evaluation, the participants may be asked via questionnaires about their perceptions of the deliberative qualities of the platform and the quality of its moderation. Also, if sentiment analyses are conducted within this phase to gather feedback on the designs, the evaluation may, on the one hand, use their results to gauge the deliberation quality, e.g. by assessing the civility of the interaction between the participants. On the other hand, it may try to establish how well the sentiment analysis itself was conducted and how much it contributed towards the aim of objectifying the process, i.e. which consequences were drawn from it.
Phase 5: Integration
In the integration phase, the co-design brief, the design proposals, the professional designs and information from analyses of the public sentiment are integrated, using a gallery tool which allows for discussion and voting. The output of this phase is a final design proposal which enjoys broad participant support.
Regarding the evaluation of internal transparency and facilitation of deliberation, this phase largely calls for the same operationalisations as in phases 2 and 3 (co-briefing and co-design).
Regarding internal transparency, it will additionally be important in the integration phase that the voting process is transparent, i.e. it must be clear which differences exist between the alternatives to choose from and how the voting process works. The evaluation may rely on expert judgements regarding these issues, but may also use questionnaires to assess the participants’ perception of this transparency.
For the facilitation of deliberation, the gallery tool will need to allow for discussion and voting. The tool itself, as well as the quality of possible moderation efforts therein, can be evaluated using the same approach as the evaluation of the feedback tool in phase 4 (see above).
Phase 6: Voting
In this last phase, the final design proposal will be approved by the project initiator and later handed over to the authorities, who will then continue the legal process leading to the implementation of the design.
In this phase, the evaluation may focus on the quality of communication between the involved parties and the transparency of the decision-making. Similar to the evaluation of internal transparency in phase 1, evaluators may consider assessing the communication quality by analysing the documents created in this phase (e.g. e-mails, resulting planning documents) or by interviewing the actors. Beyond that, one may argue that a participation process has only been successful once formal decision-making has accepted and implemented its outcome. Hence, if the timing of the evaluation allows, the actual implementation of the voting results should also be evaluated.