Our aim for the study framework (called runs in formr, for historical reasons) was to offer high flexibility while still making common study designs easy to implement. To this end, we implemented a programming environment using IF conditions and GOTO statements. In practice, this means that researchers specify a simple sequence of controls resembling those of a tape deck—Stop, Pause, Skip Forward, Skip Backward, and Shuffle—together with three special controls: for Surveys, for sending Emails, and for External calls (e.g., for sending text messages). Not unlike tape decks, GOTO statements are fairly ancient technology, but we think they work well for the simple programming that most studies will require.
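To make this concrete, the condition attached to a control such as Skip Forward can be thought of as a single R expression evaluated against a participant's earlier answers. A minimal sketch, assuming a hypothetical screening survey with items `age` and `consent` (the survey and item names are ours, for illustration only, not part of formr):

```r
# Mock one participant's screening answers (the names `screening`,
# `age`, and `consent` are hypothetical examples)
screening <- data.frame(age = 23, consent = "yes")

# A skip condition evaluates to TRUE or FALSE; if TRUE, the run
# jumps ahead (e.g., past a Stop unit meant for ineligible users)
screening$age >= 18 & screening$consent == "yes"
# [1] TRUE
```

Chaining a handful of such conditions with the tape-deck controls covers most branching a typical study needs, without requiring a general-purpose programming language.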
All runs need to end in a Stop unit, which functions as the endpoint of the study. Stop units can be configured to show text to the user and can be used to give psychological feedback or tell users they cannot participate. In the latter case, there will be two Stop units: one for those who cannot participate and one for those who can. A simple one-shot survey, such as those most commonly used in marketing research or cross-sectional psychological studies, would be a Survey unit followed by a Stop unit (see Fig. 2).
As an example of the more complex behavior that formr is capable of, consider a simple daily diary study, illustrated in Table 3.
For the study specified in this framework, almost anything can be customized. In the settings, researchers can change the link for the study, its title, its header image, and a footer for each page (e.g., providing contact information for the study team). Files can also be uploaded in order to include images, sound, or video elements in the study.
At the level of the study framework, several logs are kept. One is the log of a user’s positions in the run, allowing researchers to track who was where and when. The software also logs sent emails and the results from automatic processing, such as API calls.
Testing a study in formr
Because formr allows for fairly high complexity, we found it important to furnish researchers with tools to test that their studies operate as intended. Chief among them are test accounts. Study administrators can create new test accounts at the click of a button. They then receive a link that they can use themselves, as well as send to research assistants and co-researchers. The link defaults to a random animal name (this can be customized), so that administrators can test the study in different conditions or under different circumstances: For example, one could assign a student assistant the moniker tenderUnicorn to test the study as a single father, awkwardTurtle to test it as a teenager, lazyStarfish to contribute a lot of missing data in the diary, slowCheetah to respond very late after the diary invitation, mobileMoose to test on an Android smartphone, and so on. Testers can then report whether the study worked well under their testing conditions, whether any questions sounded odd to their persona, and whether the generated feedback made sense. Because entering data manually is often an unnecessary chore, especially for repeated surveys, test accounts automatically have a small “monkey bar” at the bottom of the survey. At the click of a button, testers can fill out the surveys using dummy data. They can also jump to different positions in the study sequence, delete their data, and end pauses prematurely. The data from these testers can be downloaded and checked for completeness.
Study administrators can also test surveys in isolation, but since this leaves out much of the complexity that tends to cause problems, we only recommend this for initial testing; researchers should thoroughly test complex studies before they are rolled out to real participants. Initially enrolling only a small number of participants can also be a wise strategy for further real-life testing. In longitudinal studies, we recommend keeping at least one tester enrolled for the duration of the study.
When the testing mode is turned on, any R code that is executed is tracked for debugging. If code errors occur, the code and error messages are automatically shown. If no code errors occur, testers can show the code using the magnifying glass and download it onto their computer to debug in a program such as RStudio (which has advanced debugging capabilities). If the testing mode is turned off, less informative error messages are shown, so as to avoid confusing real participants or accidentally disclosing too much information about the study.
Monitoring an ongoing study
Longitudinal studies often require that communication with participants and special cases be managed. The main command center for this is the Users section in the study administration. The Users section offers a way to move users around in the run—for instance, to deactivate users or correct misplacements resulting from incorrectly configured control flows. It can also be used to send preformulated reminders. Perhaps most usefully, if participants report problems, the administrator can enter the study pretending to be the user (spy button) after enabling the testing mechanisms (stethoscope button) in order to gather more useful debugging information. Importantly, users are listed by their anonymous tokens, not by identifying information. If personal data have been properly separated from the research data as outlined above, it is possible to administer participants and fix problems without seeing personal data.
For some researchers, it can also make sense to program a custom R Markdown script in the Overview section, where they can define data aggregations and subsets to examine the data as they trickle in. Researchers can use formr to regenerate this report on the fly. It can be an additional tool with which to monitor data quality, or even to track how much evidence has accumulated for an a priori hypothesis within a sequential-testing framework (Lakens, 2014; Schönbrodt, Wagenmakers, Zehetleitner, & Perugini, 2017).
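As an illustration, a monitoring chunk in such a report might tabulate completion and a running mean per participant. The data frame below is fabricated for this sketch; in a live report, the rows would come from the study's accumulating result tables:

```r
# Fabricated diary rows standing in for data trickling in during
# the study (session tokens and the `mood` item are hypothetical)
diary <- data.frame(
  session = rep(c("tenderUnicorn", "awkwardTurtle", "lazyStarfish"), each = 3),
  mood    = c(4, 5, 3,  2, 3, 2,  NA, 4, NA)
)

# Per-participant completion counts, to spot dropouts and
# data-quality problems early
filled <- aggregate(mood ~ session, data = diary,
                    FUN = function(x) sum(!is.na(x)), na.action = NULL)
names(filled)[2] <- "diaries_filled"

# Per-participant mean mood across the non-missing entries
means <- aggregate(mood ~ session, data = diary, FUN = mean)
names(means)[2] <- "mean_mood"

merge(filled, means, by = "session")
```

Regenerating such a report on the fly makes it easy to notice, for instance, that one participant has contributed only one usable diary entry so far and may need a reminder.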
Sharing a study or run
Users can export the entire study with all settings, optionally including all survey items, as a JSON file. This makes it easy to share study designs with other researchers, allowing them to reproduce and extend the work. Researchers can also design components such as a peer rating, a customized reminder, or personality feedback (see Fig. 3 for an example), so that other researchers can easily mix and match components to create new studies.
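Schematically, such an export serializes the ordered run units and their settings. The fragment below is purely illustrative of this idea; the field names and structure are invented for exposition and do not reflect the actual formr export schema:

```json
{
  "name": "example_run",
  "units": [
    { "type": "Survey",      "survey": "screening" },
    { "type": "SkipForward", "condition": "screening$age < 18", "to": 5 },
    { "type": "Email",       "template": "diary_invitation" },
    { "type": "Pause",       "wait": "next morning" },
    { "type": "Stop",        "feedback": "Thank you for participating!" }
  ]
}
```

Because the export is a plain, human-readable text file, it can also be placed under version control or archived alongside a preregistration.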