The idea of the Builder interface was to allow the user to create a graphical representation of an experiment. From this, the software would then generate a Python script to actually run the experiment. We wanted something that would be cross-platform, open and free, and that would support Python programming when experiments needed extending. We also wanted to provide stimuli that were dynamic, with stimulus attributes that could be updated on each screen refresh as specified directly from the graphical interface, which was not possible (or certainly was not easy) using other graphical interfaces. An image of the Builder interface can be seen in Fig. 1.
How does Builder work?
In PsychoPy Builder, an experiment is described by a set of Routines, which contain a set of one or more Components, such as stimuli and response options. The Components in the Routines can be thought of as a series of tracks in a video- or music-editing suite; they can be controlled independently in time—that is, onsets and offsets—but also in terms of their properties. The last part of the experiment description is the Flow: a flow diagram that controls how the Routines relate to each other. It contains the Routines themselves, as well as Loops (which repeat the Routines they encompass). The Flow has no “knowledge” of time per se; it simply runs each Routine immediately after the previous one has ended. The experimental timing is controlled by specifying the times of onset and offset of the stimuli and of response-gathering events within the Routines themselves.
This experiment description is internally stored in terms of standard Python objects: a Python list of Routines, each of which is a list of Components, which are themselves essentially a Python dictionary of parameters and, finally, a list of items on the Flow. Builder saves the experiment as standard XML-formatted text files using the open-standard psyexp format (read more at http://www.psychopy.org/psyexp.html). These files need not be specific to PsychoPy or Python; any system that can interpret a simple XML file could theoretically receive a Builder-generated experiment file and use that description to conduct the study, if it has a similar set of stimulus features.
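The nested structure described above can be sketched in plain Python. The dictionaries, component names, and XML layout below are illustrative only; they do not reproduce PsychoPy's actual classes or the real psyexp schema:

```python
import xml.etree.ElementTree as ET

# Hypothetical sketch: an experiment as Routines (lists of Components,
# each essentially a dict of parameters) plus a Flow listing Routines
# and Loops. Names and fields here are invented for illustration.
experiment = {
    "routines": {
        "trial": [
            {"type": "TextComponent", "name": "fixation",
             "params": {"text": "+", "startTime": 0.0, "duration": 0.5}},
            {"type": "KeyboardComponent", "name": "resp",
             "params": {"allowedKeys": "left,right", "startTime": 0.5}},
        ],
    },
    "flow": [("LoopBegin", "trials"), ("Routine", "trial"),
             ("LoopEnd", "trials")],
}

# Serialize to a minimal XML description, loosely in the spirit of psyexp
root = ET.Element("Experiment")
routines = ET.SubElement(root, "Routines")
for name, comps in experiment["routines"].items():
    routine = ET.SubElement(routines, "Routine", name=name)
    for comp in comps:
        el = ET.SubElement(routine, comp["type"], name=comp["name"])
        for key, val in comp["params"].items():
            ET.SubElement(el, "Param", name=key, val=str(val))
flow = ET.SubElement(root, "Flow")
for kind, name in experiment["flow"]:
    ET.SubElement(flow, kind, name=name)

xml_text = ET.tostring(root, encoding="unicode")
print(xml_text[:40])
```

Because the description is plain XML, any tool that can parse such a file could, in principle, reconstruct and run the study.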
The first job of the Builder interface is to provide a graphical means to create and represent these psyexp experiment descriptions. The second job is to be able to generate, from those descriptions, working code to actually run the experiments. This step is made relatively easy by Python’s powerful text-handling structures and object-oriented syntax. Users can compile and inspect the resulting script at the click of a button.
The generated Python code is well-formatted and heavily commented, to allow users to learn more about Python programming—and the PsychoPy package in particular—in a top-down fashion. Users can adapt that output script and then run the adapted version themselves, although this is a one-way process: scripts cannot be converted back into the graphical representation.
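Python's text-handling facilities make this generation step straightforward. The following is a minimal, hypothetical sketch of template-based script writing; the template, `make_stimulus`, and the component dictionaries are invented for the example and are not Builder's actual code writer:

```python
# Hypothetical sketch of template-based script generation: each
# component contributes lines at named points in a script template.
SCRIPT_TEMPLATE = """\
# --- generated experiment script (sketch) ---
{setup}

for frame_n in range(n_frames):
{each_frame}
"""

def generate_script(components):
    setup_lines = []
    frame_lines = []
    for comp in components:
        # Each component emits its own setup and per-frame code
        setup_lines.append(f"{comp['name']} = make_stimulus({comp['params']!r})")
        frame_lines.append(f"    {comp['name']}.draw()  # drawn every refresh")
    return SCRIPT_TEMPLATE.format(
        setup="\n".join(setup_lines),
        each_frame="\n".join(frame_lines),
    )

script = generate_script([
    {"name": "fixation", "params": {"text": "+"}},
    {"name": "target", "params": {"image": "face.png"}},
])
print(script)
```

The real generator is considerably richer (it handles timing, data saving, and conditional onsets), but the principle of assembling commented code from per-component contributions is the same.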
The Builder also provides a Code Component that allows users to execute arbitrary Python code at any of the same points available to standard Components (at the beginning of the experiment, the beginning of a trial, on every screen refresh, etc.). These Code Components allow the experimenter to add a high level of customization to the study without leaving the comfort of the Builder interface. This provides a balance between ease of use (via the graphical interface) and flexibility (allowing out-of-the-ordinary requirements to be implemented in custom code).
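One way to picture how such snippets slot into the generated script is as code registered at fixed hook points and executed in a shared namespace. This is an illustrative sketch, not Builder's actual mechanism; the hook names and helper functions are invented:

```python
# Hypothetical sketch: user snippets registered at the same points
# standard Components use, then executed in a shared namespace.
hooks = {"begin_experiment": [], "begin_routine": [], "each_frame": []}

def add_code_component(point, snippet):
    # Compile once so syntax errors surface at registration time
    hooks[point].append(compile(snippet, "<code component>", "exec"))

def run_hooks(point, namespace):
    for code in hooks[point]:
        exec(code, namespace)

ns = {}
add_code_component("begin_experiment", "trial_count = 0")
add_code_component("begin_routine", "trial_count += 1")

run_hooks("begin_experiment", ns)
for _ in range(3):               # three routines in the flow
    run_hooks("begin_routine", ns)

print(ns["trial_count"])         # → 3: user code ran once per routine
```

Because the snippets share the script's namespace, variables defined in one hook (here `trial_count`) remain available to later hooks and to standard Components.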
OpenSesame (Mathôt et al., 2012) has a similar goal of providing a graphical interface for specifying experiments. OpenSesame uses, among several options, the PsychoPy Python module as a back end to present stimuli and has become popular for its ease of use and slick interface. It differs from PsychoPy mainly in that (1) OpenSesame represents the flow in a nested list, similar to E-Prime, whereas PsychoPy has a horizontal flow with loops; (2) in Routines, the PsychoPy interface emphasizes the temporal sequence of components, whereas the OpenSesame interface emphasizes their spatial layout; (3) PsychoPy allows the experimental and stimulus parameters to vary dynamically on every screen refresh during a routine, whereas OpenSesame requires stimuli to be pregenerated; and (4) PsychoPy generates Python scripts that can be exported and run independently of the graphical interface.
How well does the Builder achieve its goals?
Intuitive enough for teaching
The aim was to allow nonprogrammers, including undergraduates, to be able to generate experiments. The best way for the reader to judge whether this aim has been achieved is perhaps to watch one of the walk-through tutorials on YouTube (e.g., https://www.youtube.com/playlist?list=PLFB5A1BE51964D587). The first of these videos shows, in 15 min and assuming no prior knowledge, how to create a study, run it, and analyze the generated data.
PsychoPy’s Builder interface is being used for undergraduate teaching in many institutions, allowing students to create their own experiments. At the School of Psychology, University of Nottingham, we previously used E-Prime for our undergraduate practical classes. In the 2010–2011 academic year, our first-year undergraduates spent the first half of their practical course using PsychoPy and the second half using E-Prime. We surveyed the students at the end of that year: of the 60 respondents, 31 preferred PsychoPy to E-Prime (as compared with nine preferring E-Prime, and the remainder expressing no preference), and 52 reported that they could “maybe,” “probably,” or “definitely” create a study on their own following the five sessions with PsychoPy. PsychoPy has gained a number of usability improvements since then, and Nottingham now uses PsychoPy for all its undergraduate practical classes.
Flexible enough for high-quality experiments
We aimed to generate software that can implement most “standard” experiments to satisfactory levels of temporal, spatial, and chromatic accuracy and precision. In terms of features, the Builder can make use of nearly all the stimuli in the PsychoPy library with no additional code. For instance, it can present images, text, movies, sounds, shapes, gratings (including second-order gratings), and random-dot kinematograms. All of these stimuli can be presented through apertures, combined with alpha-blending (transparency), and updated in most of their parameters on every screen refresh. Builder also supports inputs via keyboard, mouse, rating scales, microphone, various button boxes, and serial and parallel ports. It also supports a wide range of experiment structures, including advanced options such as interleaved staircase (e.g., QUEST) procedures. The use of arbitrary loop insertions, which can be nested and can be inserted around multiple other objects, allows the user to create a wide range of experimental flows. Figure 2 is a screenshot of one such Builder representation of an experimental flow.
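As an illustration of the staircase idea (though not of PsychoPy's own StairHandler interface), a simple 1-up/2-down rule can be sketched in a few lines; the function name and parameters below are invented for the example:

```python
# Illustrative 1-up/2-down staircase: the stimulus gets harder (lower
# intensity) after two consecutive correct responses and easier after
# each error. This converges near the 70.7%-correct threshold.
def run_staircase(responses, start=1.0, step=0.1):
    intensity = start
    levels = []
    correct_streak = 0
    for correct in responses:
        levels.append(round(intensity, 3))   # level presented this trial
        if correct:
            correct_streak += 1
            if correct_streak == 2:
                intensity -= step            # harder after 2 correct
                correct_streak = 0
        else:
            intensity += step                # easier after an error
            correct_streak = 0
    return levels

levels = run_staircase([True, True, True, False, True, True])
print(levels)  # → [1.0, 1.0, 0.9, 0.9, 1.0, 1.0]
```

In Builder such a procedure is configured graphically, as a staircase-type Loop around a Routine, rather than written by hand.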
At times, an experimenter will require access to features in the PsychoPy library that have not been provided directly as part of the graphical interface (often to keep the interface simple), or will want to call external Python modules beyond the PsychoPy library itself. This can be achieved by inserting snippets of custom code within a Code Component, as described above.
As evidence that PsychoPy is used by professional researchers, and not just as a teaching tool, according to Google Scholar, the original article describing PsychoPy (Peirce, 2007) now has over 1,800 citations. Most of these are empirical studies in which the software was used for stimulus presentation and response collection. The Builder interface is used not only by nonprogrammers, but also by researchers perfectly adept at programming, who find that they can create high-precision studies with greater efficiency and fewer errors by using this form of “graphical programming.”
Indeed, several of the authors of this article use the Builder interface rather than handwritten Python code, despite being very comfortable with programming in Python. Overall, the clearest indication that people find PsychoPy both easy to use and flexible is the growth in user numbers since the Builder interface was first released (see Fig. 3). We have seen user numbers grow from a few hundred regular users in 2009 to tens of thousands of users per month in 2018.
Precision and accuracy
The Builder interface includes provision for high-precision stimulus delivery, just as with the code-driven experiments. Notably, the user can specify stimulus durations in terms of numbers of frames, for precise short-interval timing. PsychoPy will handle a range of issues, such as ensuring that trigger pulses to the parallel port are synchronized to the screen refresh. Builder-generated scripts are oriented around a drawing and event loop that is synchronized to the regular refresh cycle of the computer monitor. Hence, in general, the presentation of visual stimuli is both temporally accurate (being presented at the desired time and for the desired duration) and precise (with little variability in those times). One published study suggested that the temporal precision of PsychoPy’s visual stimuli was poor (Garaizar, Vadillo, López-de-Ipiña, & Matute, 2014), but this finding was an artifact of the authors’ use of a prototype version of the Builder interface (v1.64, from 2011, which carried an explicit warning that it should not be used for precision studies). The authors subsequently reran their analysis using an official production-ready release (v1.80, 2014). Using the timing approach recommended in the documentation, they found very good timing of visual stimulus display under “normal usage” (Garaizar & Vadillo, 2014).
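The frame-counting approach can be illustrated in plain Python. The refresh rate, helper function, and loop below are a simplified sketch: in a real generated script, the call to `win.flip()` blocks until the monitor's vertical refresh, which is what ties frame counts to physical time.

```python
# Sketch of frame-based timing (illustrative; a real Builder script
# calls win.flip(), which blocks until the next vertical refresh).
REFRESH_RATE = 60.0              # Hz, assumed monitor refresh rate

def seconds_to_frames(t):
    # Round to the nearest whole refresh: 0.5 s -> 30 frames at 60 Hz
    return int(round(t * REFRESH_RATE))

stim_on = seconds_to_frames(0.5)    # stimulus onset at 0.5 s
stim_off = seconds_to_frames(0.75)  # stimulus offset at 0.75 s

drawn_frames = []
for frame_n in range(seconds_to_frames(1.0)):   # a 1-s trial
    if stim_on <= frame_n < stim_off:
        drawn_frames.append(frame_n)            # stimulus.draw() here
    # win.flip() would present the frame and wait for the refresh

print(len(drawn_frames))   # → 15 frames, i.e., 250 ms at 60 Hz
```

Counting frames rather than polling a clock avoids the off-by-one-frame errors that arise when a requested duration falls between two refreshes.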
A current limitation of temporal stimulus accuracy and precision, however, is the presentation of sound stimuli. There can be a lag (i.e., impaired accuracy) of sound onset, potentially up to tens of milliseconds, with associated trial-to-trial variability in those times of onset. Sound presentation relies on one of several underlying third-party sound libraries, and performance can vary across operating systems and sound hardware. The authors are currently conducting objective testing of performance across all these factors and updating PsychoPy’s sound library to one with better performance.