Introduction

In modern experimental psychology and cognitive neuroscience, most empirical work in the laboratory is based on standard personal computers, and data acquisition requires highly accurate timing of both the presentation of the experimental stimuli and the recording of the participant’s responses. To facilitate the creation of time-accurate experimental procedures, researchers need specialized software tools that implement the necessary routines and provide a high level of abstraction. This software can generally be classified into two categories: experiment builders and programming libraries.

Experiment builders are full applications with a graphical user interface, through which the user creates experimental settings by arranging readymade pieces on a visual sketchpad (in a point-and-click manner). This category includes a variety of commercial (see Stahl, 2006, for a review) as well as open-source (e.g., Mathôt, Schreij, & Theeuwes, 2012) applications. Although experiment builders provide an intuitive way for nonprogrammers to rapidly create experiments, they suffer from the disadvantage that the use of a graphical interface results in a lack of flexibility and control. First, the scope of experimental settings that can be created is limited by the application itself, and second, crucial aspects of the resulting experiments will be controlled by the experiment builder software and not by the user. For many experimenters, these limitations represent a serious problem, and sometimes they even make it impossible to implement the required experimental protocols. It is therefore often necessary to develop custom software and to base the implementation of an experiment on classical programming or scripting languages.

For this purpose, specific programming libraries for experiments have been developed. These provide collections of data structures and routines that add specific functionality to existing programming languages. A popular example of this category is the open-source Psychophysics Toolbox (Brainard, 1997) for the commercial MATLAB programming language (MathWorks Inc., Natick, MA). Using a programming library specializing in experimental settings, and being able to combine it with other libraries, ensures flexibility and puts the user in control.

In the present article, we present Expyriment (pronounced like the English word “experiment”), a library written in and for the Python programming language (Van Rossum & Drake, 2011), aimed at the design and conduct of behavioral as well as neuroimaging experiments. Python has also been used in related projects that offer solutions for the control of behavioral experiments or the timing-critical presentation of stimuli, such as PsychoPy (Peirce, 2007) or Vision Egg (Straw, 2008). We specifically chose to develop for the Python programming language because we believe that it is especially suited for scientific computing and ideal for the development of script-based experiments, for a number of reasons: First, Python is open-source software and is freely available to researchers as well as to students. Second, being platform independent, Python runs on Windows, Linux, and Mac OS. Third, Python has the reputation of being one of the easiest programming languages to learn (even for nonprogrammers), mainly due to its clear and simple syntax, making it a very popular tool for researchers worldwide (Bassi, 2007). Furthermore, the easy syntax results in very readable programming code (an important concern for a scientific community in which algorithms and routines are shared among researchers). Fourth, Python is an interpreted language, allowing for rapid write–test cycles during development and immediate testing of short snippets of code (via the interactive interpreter). Fifth, in contrast to other open-source programming languages commonly used in the scientific community, such as R (R Development Core Team, 2012), Python is not restricted to particular tasks (e.g., data analysis), but aims to be an all-purpose programming language. As a final, and maybe the most important, argument for choosing a particular open-source software for scientific purposes (cf. Halchenko & Hanke, 2012), Python has a large and active community, which ensures the continuous maintenance and development of the language and offers extensive documentation and support. Due to this broad scope of applications and the community-driven approach, programmers are not restricted to the large standard library with which Python is natively equipped: a huge number of free third-party libraries are available to extend Python’s functionality. For instance, NumPy (Oliphant, 2006) and SciPy (Jones, Oliphant, Peterson, et al., 2001) are libraries specialized in scientific computing and data handling.

Importantly, Expyriment builds on these advantages and represents a cross-platform library that has been developed to run under all common desktop operating systems (Windows, Linux, and Mac OS). Additional experimental support for Android devices exists and is under active development. Expyriment has been tested intensively under Windows XP and 7, as well as under Linux distributions based on Debian Linux. Expyriment is free software, released under the open-source GNU General Public License (Free Software Foundation, 2007).

The development of Expyriment was guided by several principles:

  1.

    Expyriment is a programming library and is not meant to be a full application. It does not rely on a graphical user interface and does not implement an internal control loop to automatically handle events in the background. Instead, we have focused on providing a lightweight Python library entirely written in Python itself, with only a few dependencies on other Python libraries (see below), in order to give the user full control over timing-critical event handling.

  2.

    Expyriment has a strong focus on experimental design. That is, in contrast to other existing Python libraries—for instance, PsychoPy (Peirce, 2007) or Vision Egg (Straw, 2008)—an Expyriment program is not necessarily centered around stimulus presentation (i.e., the creation and manipulation of experimental designs is an integral part of the library itself, and Expyriment can even be used solely for this purpose). Expyriment offers a structure for experimental designs that is independent of the presentation software actually used and that can be exported as pure text files. By building a hierarchical relation between the constituents of an experiment, the transition from thinking about the conceptual design to its technical implementation is facilitated to a great extent. Expyriment can therefore also be helpful for researchers who want to use other software for the presentation of stimuli and experimental control.

  3.

    Expyriment follows a modular approach. This means that parts of the library can be used independently from the rest of the library and in conjunction with other Python libraries (see, e.g., Point 2).

  4.

    Expyriment is easily extensible and offers a built-in plugin system. This plugin system allows for the unified use and sharing of user-made experimental stimuli, devices, or design elements. More detailed information regarding the development of Expyriment plugins can be found in the Expyriment online documentation.

  5.

    Expyriment strongly emphasizes the readability of the code. Since researchers share experiments with other researchers, colleagues, and students, an experiment’s source code should be as readable and easily understandable as possible. On top of the easy syntax offered by the Python programming language, Expyriment is strictly object-oriented, which further increases clarity. Expyriment shares the goal of code readability with most other Python-based experiment control platforms, but it extends this readability principle to the specification of the experimental design. The resulting programming code should most directly resemble the structure of the study, and should in the first place facilitate researchers’ thinking about the experimental design. Expyriment therefore allows the hierarchical formalization of abstract designs in Python code, independent of the programming of the stimulus presentation and experimental control.

  6.

    Like most Python libraries, Expyriment also aims to be cross-platform. Although most commercial software supports Windows, and sometimes Mac OS, Linux is often left out of the picture or is an afterthought. Expyriment was especially developed with Linux users in mind, and each version of Expyriment is tested intensively on Debian Linux and on the Debian-based Linux distribution Ubuntu.

Expyriment is freely available for download from www.expyriment.org. The website further provides links to the online documentation, a tutorial, example experiments, and the Expyriment newsletter and mailing list. The version of Expyriment described in the present article is 0.6. In the following sections, we will first provide a description of the structure and functionality of the library. Afterward, we will showcase the use of Expyriment with an example experiment. Finally, we will provide technical information on the implementation and empirical results on timing accuracy.

Overview

Structure

The Expyriment library follows a strictly modular and object-oriented structure. It is organized into five packages, each focusing on a different area of an experimental setting. After importing Expyriment in a Python script or interactive session, the following packages become available: design, stimuli, io, misc, and control. Each package consists of classes, functions, and further modules. Figure 1 provides a graphical overview of the library.

Fig. 1

An overview of the structure of the Expyriment library. Expyriment consists of five packages (red): design, stimuli, io, misc, and control. Each package provides various classes (green), functions (white), and further modules and default settings (yellow). A plugin system (violet) allows for user-written extra functionality

In Expyriment, all packages have a defaults module that contains variables describing the default values for most classes and certain global settings within that package. That is, for each class, the default values of optional arguments that can be specified during instantiation are set via variables in the defaults module. Importantly, these default values can be overwritten by the user in order to make specific global settings for each experiment.

The design, stimuli, and io packages allow for the integration of user-written plugins (called extras) in order to extend Expyriment with, for example, specific randomization procedures, more advanced stimuli, or custom-made response devices. The role and functionality of each package can be summarized as follows.

expyriment.design

The design package provides classes describing experimental structures. The main role of this package lies in the specification of experimental designs. This is done by building a hierarchical relation between an experiment, the experimental blocks, and the experimental trials, and by specifying between- as well as within-subjects factors. On the basis of this specification, the design package is further capable of randomizing or permuting the experimental blocks or trials within the defined experiment. As a simple example, consider that you want to create an experiment with two blocks, each containing 60 randomized trials of three different conditions A, B, C:


Importantly, designs can be exported to text files (by calling the experiment’s save_design() method). When specified without using any Expyriment presentation feature, such as stimulus objects, the design package can be used in combination with other Python libraries or programming environments for stimulus presentation that either do not offer experimental design structures (e.g., Vision Egg) or that rely mainly on importing externally created designs (e.g., PsychoPy, using its importConditions() function). Using an interactive Python session, Expyriment can therefore also help researchers to formalize and review complex designs or randomizations.

expyriment.stimuli

The stimuli package provides classes for a variety of visual as well as auditory experimental stimuli. The role of this package thus lies in the definition and creation of stimuli. Once a stimulus is created, it can be integrated into the hierarchical design structure, completing the definition of the experiment. Importantly, stimuli can be presented on the display directly and do not rely on a separate external presenter—for example,


To create more complex visual scenes, visual stimuli can also be plotted on other stimuli. The resulting combined stimulus can then be preloaded and/or presented as a whole. This always ensures precise timing, even for complex stimuli:


Another way to achieve the simultaneous presentation of several stimuli is to consecutively present them without clearing the display contents and only to update the display after the presentation of the last stimulus:


Although the first method is often preferable, since it allows for reducing visual scenes into a single object, the second method can be useful when a screen has to be built up by incrementally adding stimuli—for instance, on the basis of user input.

Updating the screen can also be done explicitly by calling the update() method of the screen object (see the Expyriment.control section below). To maximize the speed with which a stimulus appears on the screen, it can in some cases be necessary to update only the part of the screen on which the stimulus appears. This is possible by calling the screen object’s update_stimuli() method and specifying a list of stimuli to update as an argument. Note, however, that this will only work as expected when OpenGL mode is switched off (see the Software evaluation and timing section below).

expyriment.io

The io package provides classes for handling input and output. Its main role is to facilitate communication with external devices (e.g., keyboard, button box, etc.), as well as to handle log files. Most io classes can also be used independently from the other Expyriment packages—for example,


expyriment.misc

The main role of the misc package is to provide additional functionality that goes beyond the scope of the other packages—for instance, the functionality to preprocess the acquired data files for use in further statistical software (see the Additional features section below). This functionality does not depend on Expyriment-specific data and thus can easily be integrated into existing Python code—for example,


expyriment.control

The task of the control package is to control the implementation of a formerly defined experiment. It provides the three important functions initialize(), start(), and end() and facilitates interaction with the display, clock, and keyboard, as well as with data and event files, by automatically integrating them into the current experiment definition. With a complete hierarchically structured definition of an experiment, conducting the experiment consists of nothing more than sequentially iterating over the hierarchical structure.

The control package plays a central role for the processing of experiment code, since it is involved in the building of the scaffolding of each Expyriment program. A typical script comprises three central commands, which can be described as the crucial landmarks for the flow of the experimental control program:


Landmark 1 initializes an experiment. If no experiment object is given as a parameter, a new experiment is automatically created and returned by this function. Experiment initialization will create a screen object representing the computer’s display (available as exp.screen), a keyboard input device object to receive input from the keyboard (available as exp.keyboard), an event log file object that automatically logs stimulus presentation times and device communication in the background (available as exp.events), and an experimental clock object providing timing functionality (available as exp.clock). After this landmark, the experimental design hierarchy and the experimental stimuli can be created. Landmark 2 starts the currently initialized experiment. This will ask for a subject ID on the display (afterward available as exp.subject) and create a data file object that can be used to log experimental variables on a trial-by-trial basis (available as exp.data). After this landmark, the experiment can be conducted by iterating over the hierarchical design (see below). Landmark 3 ends the experiment, which will close the screen and save all unwritten log files to disk.

The control package also includes the Expyriment test suite (see the Additional features section below), as well as some global settings concerning the execution of the experiment (e.g., display, audio, and event-logging settings, set via the defaults module).

Additional features

Test suite

When using software to control the implementation of a scientific experiment, it is absolutely necessary to guarantee proper functioning. The Expyriment test suite is a visually guided tool for testing several aspects of Expyriment on a specific system. The tests include the timing accuracy of visual stimulus presentation, audio playback functionality, mouse functionality, and serial port functionality. Finally, all test results can be saved as a protocol, together with various information about the system that Expyriment is running on. The test suite can be started by calling expyriment.control.run_test_suite(). We strongly recommend always running the test suite before testing participants, in order to guarantee proper functioning.

Develop mode

During the development of an experiment, it can be convenient to change some of the default settings in the control package—such as starting the experiment in a small window (instead of occupying the full screen), suppressing the startup and ending messages, automatically creating successive subject IDs (without asking for one), and switching off time stamps for output files. To conveniently activate all of these common settings at once, the control package allows for switching into a dedicated development mode by calling expyriment.control.set_develop_mode(True) before initializing an experiment.

Command line interface

Expyriment comes with a command line interface that allows for starting the test suite or a graphical tool to browse the application programming interface reference, as well as for making specific settings (e.g., running in develop mode) for a single execution of an Expyriment script, without manipulating the script file. A full description of the available options can be obtained by calling the command line interface with the help flag (-h).

Data preprocessing

In most cases, the data acquired by Expyriment needs to be processed further before a statistical analysis can be performed. This processing entails aggregation of the dependent variables over all factor-level combinations of the experimental design. Expyriment provides an easy but flexible way to automatize this process with the data preprocessing module included in the misc package (expyriment.misc.data_preprocessing). Further information can be found in the Expyriment online documentation. The Expyriment website also provides an R script for conveniently reading the Expyriment data files of several subjects into a single R data frame.

Use

To showcase the use of Expyriment, we will design and implement a simple behavioral experiment for assessing a Simon effect (Hommel, 1993; further examples can be found at the Expyriment website). In two experimental tasks, participants have to respond to a rectangle on the screen, according to its color (red or green), by pressing the left or the right arrow key on the computer’s keyboard. Additionally, the position of the rectangles can be either left or right. Each trial will start with the presentation of a fixation cross for 500 ms, followed by the rectangle that will remain on the display until a response is given. Between trials, a blank screen is shown for 3,000 ms. Each block will contain 128 trials in random order. The two tasks will differ only in the mapping of responses (i.e., which button to press for which color), which will be shown to the participant as a brief instruction at the beginning of each block. The order of tasks will be counterbalanced over participants. The experiment has a 2 × 2 × 2 × 2 factorial design, with the within-subjects factors Color (red, green), Position (left, right), and Task (left = green, left = red), as well as the between-subjects factor Task Order (left = green first, left = red first).

Example program

The resulting programming code is shown in Listing 1 in the Appendix. After importing all five Expyriment packages (line 1), we start creating the experimental design. First, we create an experiment object that will be the root of the design hierarchy (line 5). We then utilize the control package in order to initialize the just-created experiment (line 6). Next, we build the design hierarchy by iterating over all levels of all factors in a nested fashion (lines 9–24). For each of the two tasks (left = green, left = red), we create a block object and set the block factor Task to the corresponding level (lines 10–11). Within each task, we then create a trial object for all combinations of location (left, right) and color (red, green) and set the trial factors Position and Color to the corresponding levels (lines 12–17). Note that in both cases, we actually loop over lists with two items: the factor-level name and additional concrete values to be used when creating the stimuli (lines 12–13). Having defined a trial, we now create the target stimulus (a rectangle object) with the position and color defined in the loop (lines 18–19). The stimulus is added to the trial that we just created (line 20), and the list of all stimuli of this trial (including exactly one stimulus now) is accessible via the trial’s stimuli attribute. We next add 32 copies of the trial to the block (line 21); they are then accessible via the block’s trials attribute. To complete the hierarchy, we shuffle all trials in the block (line 22) and finally add each block to the experiment (line 23), such that the blocks are available via the experiment’s blocks attribute. The experiment itself gets a between-subjects factor Task Order, with the levels “left = green first” and “left = red first” (line 24).

After having created the experimental design, we define and preload two global stimuli: a blank screen object and a fixation cross object (lines 27–30).

Next, we start the experiment by using the control package (line 33). We define what we want to log by naming the variables of interest, which will form the first entry of each column in the data file (line 34). Since we want the order of the two tasks (which we defined as blocks) in our design to be counterbalanced across participants, we now swap the blocks if necessary, depending on the between-subjects factor that is coupled to the subject ID assigned when the experiment was started (lines 35–36). Having a full definition of the experimental design in one single object now allows us to simply iterate over this structure (lines 39–52): For each block in our experiment, we create and present a text screen object with simple task instructions, as defined by the block factor Task (line 40), and wait for a (any!) buttonpress by the participant (line 41), after which we present a blank screen (line 42). For each trial within that block, we first wait for 3,000 ms (the intertrial interval). During this time, we preload all stimuli within that trial (in our case, only the target stimulus) into memory to prepare them for a timing-accurate presentation (line 44). Note how this mechanism works: Preloading the stimuli returns the time that it took to finish this operation, which in turn we subtract from the total waiting time (the 3,000 ms). We now present the fixation cross on the screen (line 45) and wait for 500 ms (line 46). Then, the target stimulus (the rectangle) is presented (line 47), and we wait for the participant to respond with either the left or the right arrow key (lines 48–49). The pressed key and the response time are returned. Next, we present the blank screen (line 50) and add our variables of interest to the data file (line 51). This is a fast operation, since at this point in time the data will not yet be written to the hard disk, but will remain in memory until the experiment is ended.
Finally, we unload the stimuli of the trial again (in this case, only the target stimulus), in order to free memory (line 52). To finish the experiment, we utilize the control package (line 55) and present a goodbye message to the participant.

Event logging and data output

After the experiment has ended correctly, two files will have been created: an event log file (with the ending .xpe), in a directory called events, containing an automatic history of the experimental events (e.g., stimulus presentations and device communication), and a data file (with the ending .xpd), in a directory called data, containing what was manually saved in line 51 on a trial-by-trial basis. Both directories are located in the same place as the script holding the experimental code. All event log and data files are named according to the experiment name, followed by the subject ID and, if not otherwise specified, a time stamp.

By default, the automatic event logging will contain a detailed description of the experimental design (including a full listing of all trials) as well as stimulus presentations and the expected input/output (I/O) events (i.e., when explicitly waiting or checking for keyboard or button box responses). It is furthermore possible to activate extended event logging, which will include even more detailed information—for instance, screen operations (updating and clearing) or the full stream of I/O events polled from the serial port. In total, three levels of event logging can be set via the defaults of the control package before initializing an experiment:


Implementation

Expyriment is based on Python 2 (≥2.6) and several Python libraries that implement low-level routines for timing-critical communication with hardware components. PyOpenGL (≥3.0; PyOpenGL, 2012) is used to present visual stimuli onto the display in a timing-accurate manner. Pygame (≥1.9; Shinners, 2012) is used for auditory stimulus presentation and visual stimulus creation, as well as for interacting with the computer’s keyboard, mouse, and game port. PySerial (≥2.5) and PyParallel (≥0.2; PySerial, 2012) can be used to interact with a serial and a parallel port, respectively. Additionally, NumPy (≥1.6; Oliphant, 2006) is needed in order to use the built-in data preprocessing functionality. More detailed information about the installation procedure can be found at the Expyriment website.

Software evaluation and timing

We empirically tested the timing accuracy of the visual and auditory stimulus presentation, as well as serial port communication. All tests were performed under Windows XP SP3 (Microsoft Corp., Redmond, WA) installed on an HP DC7900CMT personal computer with 4 GB internal working memory (Hewlett Packard Co., Palo Alto, CA), equipped with a Core2Duo processor E8400 (Intel Corp., Santa Clara, CA), a Samsung SyncMaster 2233RZ display operating at 60 Hz (Samsung Electronics Co., Ltd., Suwon, South Korea), a Quadro NVS 290 video card (Nvidia, Santa Clara, CA), a Soundblaster Audigy sound card (Creative Technology Ltd., Jurong East, Singapore), and a UART 16550A compatible serial port (Intel Corp., Santa Clara, CA).

Visual stimulus presentation

For precise timing of the presentation and duration of visual stimuli, it is necessary to synchronize the stimulus on- and offsets with the video hardware. If this is not done, the reported presentation onsets and durations can be inaccurate in the range of several milliseconds. On video cards implementing the OpenGL specification in version 2.0 or higher with the required extension (in our experience, newer Nvidia and ATI cards work well, whereas we experienced several problems with Intel cards; more detailed information on hardware compatibility can be found at the Expyriment website), Expyriment supports the following mechanism to guarantee maximal visual timing accuracy: If the video card allows, visual stimulus presentation is synchronized to the refresh rate of the display (i.e., the vertical retrace). This makes sure that drawing to the display will always begin in the top left corner. Importantly, code execution is blocked until this synchronization has actually occurred. This has the important implication that the time at which Expyriment reports a stimulus as being presented is actually the time at which the stimulus is being drawn onto the display. By definition, presentation durations will thus always be exact multiples of one screen refresh.

It should be noted that the mechanism described above can be switched off by setting expyriment.control.defaults.open_gl = False before the experiment initialization. This switches off OpenGL mode and uses the Pygame library to present visual stimuli. Importantly, this results in presentations that are not synchronized with the refresh rate of the screen, which thus increases the uncertainty of when exactly a visual stimulus has been displayed. However, unsynchronized presentation can be useful in some cases—for instance, in paradigms in which the main focus lies on rapidly changing visual scenes, so that screen updates should occur without any delay (e.g., moving objects or studies with eye-contingent displays). Furthermore, Expyriment will automatically switch to using Pygame when running in window mode (by setting expyriment.control.defaults.window_mode = True before experiment initialization, or by working in develop mode).

To test the timing accuracy of visual stimulus presentation in OpenGL mode, we repeatedly presented alternating preloaded black and white blank screens on the display as quickly as possible. Display responses at the upper left corner were measured using an optical sensor (photocell) connected to a Tektronix MSO 2012 oscilloscope (Tektronix, Beaverton, OR). Before each stimulus presentation, a marker was sent to the oscilloscope via the serial port. The results revealed that the onset and offset of the white blank screen were aligned to the markers sent via the serial port, showing that the time that Expyriment reported the stimulus as being presented corresponded correctly to the time that the video card began drawing onto the display (Fig. 2a). Furthermore, the spacing between the onsets of successive stimulus presentations corresponded to about 17 ms, showing that Expyriment is capable of presenting one preloaded stimulus each screen refresh (Fig. 2b).

Fig. 2

Timing accuracy of visual stimulus presentation: a A single presentation of a white blank screen. b Consecutive presentations of white and black screens. Yellow lines indicate markers sent via the serial port before and after presenting a stimulus, and cyan lines the display response, measured with an optical sensor

Importantly, while these test results provide an empirical basis for the timing accuracy of visual stimulus presentation, they are specific to our particular configuration of system components (video hardware and driver). Any system’s visual stimulus presentation performance should therefore be tested with the integrated Expyriment test suite, which gives users a clear picture of whether or not their specific system is capable of accurate stimulus presentation.

Auditory stimulus presentation

When presenting auditory stimuli, timing accuracy is affected by two phenomena: (1) The actual point in time at which the audio stream is played back by the system can be delayed by several milliseconds, depending on the audio hardware and driver used, and (2) the amount of this delay might vary between presentations. Although a static delay does not necessarily pose a problem for most experimental settings, since all experimental conditions will be subject to the same time lag, a large variability in this delay would be problematic, as it would introduce differences between the experimental conditions.

We tested the timing accuracy of auditory stimuli by presenting a sine wave of 440 Hz. The audio system was set to play back with a sample rate of 44,100 Hz and a bit depth of 16. The buffer size was set to 128 samples. Audio performance was measured by a Tektronix MSO 2012 oscilloscope (Tektronix, Beaverton, OR) connected to the output of the sound card. Before each stimulus presentation, a marker was sent to the oscilloscope via the serial port. We measured a minimal latency of 15 ms (Fig. 3a) and a maximal latency of 20 ms (Fig. 3b). These results show that the audio playback delay was relatively stable, varying over a range of only 5 ms.
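Note that the 128-sample buffer itself accounts for only a small fraction of the measured latency; the remainder presumably arises further down the audio stack (driver and hardware). The buffer's contribution is simple arithmetic:

```python
def buffer_duration_ms(samples, sample_rate_hz):
    """Playback duration of one audio buffer in milliseconds."""
    return 1000.0 * samples / sample_rate_hz

# The 128-sample buffer at 44,100 Hz used in the test above:
print(round(buffer_duration_ms(128, 44100), 2))  # 2.9 (ms)
```

Larger buffers increase this component proportionally, which is one reason to keep the buffer small in timing-critical settings.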

Fig. 3

Timing accuracy of auditory stimulus presentation: a Minimal measured delay of audio playback. b Maximal measured delay of audio playback. Yellow lines indicate markers sent via the serial port before and after presenting a stimulus, and cyan lines the audio response, measured directly from the line output of the audio interface

Serial port communication

Another important aspect of experimental timing concerns measuring response times from a participant. Because response times are usually reported in milliseconds, a participant's response should be measurable with a precision of 1 ms. We therefore strongly discourage the use of a computer's keyboard for this task, as neither PS/2 nor USB keyboards are built for timing accuracy in this range. Rather, we advise using response devices connected to the serial port. The serial port can also be utilized, for instance, to receive triggers from a magnetic resonance imaging scanner or to send markers to an electroencephalography system. In Expyriment, the serial port can be accessed ("polled") directly in order to allow for maximal performance.
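The busy-polling idea can be sketched, independent of Expyriment's own serial-port class, as a tight loop that repeatedly attempts a non-blocking read and timestamps the first byte that arrives (the function name and stream interface here are illustrative assumptions):

```python
import time

def poll_for_byte(stream, timeout_s=1.0):
    """Busy-poll a byte stream until a byte arrives or the timeout elapses.

    `stream` is any object whose non-blocking read(1) returns b'' when no
    data is available (e.g., a serial port opened with a zero timeout).
    Returns (byte, response_time_s), or (None, None) on timeout.
    """
    start = time.monotonic()
    while time.monotonic() - start < timeout_s:
        data = stream.read(1)
        if data:
            return data, time.monotonic() - start
    return None, None
```

Polling in a tight loop trades CPU load for latency: unlike event-driven input, the first available byte is picked up within one loop iteration rather than after an operating-system callback.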

We tested the timing accuracy of serial port communication by repeatedly sending a single byte to a custom-made loop-back device (a device that immediately sends back anything it receives) and recorded the time between sending out the byte and receiving it again. We conducted this test with baud rates of 115,200 and 19,200. Our results revealed that after 1,000 repetitions, the maximal time between sending and receiving was 0.28 ms with a baud rate of 115,200, and 0.69 ms with a baud rate of 19,200, showing that both sending and receiving serial port data were reliably possible within 1 ms.
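These round-trip times are plausible given the line speed: assuming standard 8N1 framing (one start bit, eight data bits, one stop bit), one byte occupies ten bit times on the wire, which sets a lower bound on the measured loop-back latency:

```python
def byte_time_ms(baud, bits_per_byte=10):
    """Wire time of one byte in ms, assuming 8N1 framing (10 bits/byte)."""
    return 1000.0 * bits_per_byte / baud

print(round(byte_time_ms(115200), 3))  # 0.087 (measured maximum: 0.28 ms)
print(round(byte_time_ms(19200), 3))   # 0.521 (measured maximum: 0.69 ms)
```

In both cases the measured maxima exceed this theoretical floor by well under half a millisecond, consistent with the conclusion that sending and receiving are reliably possible within 1 ms.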

Benchmark experiment

To test the performance of Expyriment (in default OpenGL mode) in a more realistic setting that would be closer to actual experimental paradigms (i.e., entailing a stimulus–response loop), we developed a benchmark experiment with automated responses to visual and auditory stimuli. The benchmark test focused on the operating systems Windows and Linux, since the timing of visual presentations under OS X is less precisely controllable, because there is no way to block the execution of program code until the vertical retrace has occurred. Although developing experiments is perfectly possible on OS X, we generally discourage the use of OS X for testing participants.

Method

The experiment consisted of two parts, each containing 1,000 trials. In the first part, automated responses to a white blank screen were recorded (cf. Mathôt et al., 2012). Each trial started with the presentation of a black blank screen for 100 ms, followed by the presentation of a white blank screen. Responses were triggered by an optical sensor attached to the upper left corner of the display, every time the brightness exceeded a certain threshold. After the response was recorded, a new trial started. In the second part, automated responses to a single cycle of a beep tone with a frequency of 1,000 Hz were recorded by a custom-made response device connected to the output of the audio card, every time the sound level exceeded a certain threshold. After the response was recorded, the next trial started after a delay of 100 ms.
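Both parts share the same present–wait–record structure. A minimal sketch of such a stimulus–response loop, independent of Expyriment's API (all names are illustrative; in the benchmark, the "response" callback was driven by the optical sensor or the audio-level device), could look like this:

```python
import time

def run_trials(n_trials, present_stimulus, wait_for_response, iti_s=0.0):
    """Minimal stimulus-response loop: present, time the response, repeat.

    `present_stimulus` shows the stimulus; `wait_for_response` blocks until
    a response arrives. Returns the response times in milliseconds.
    """
    rts = []
    for _ in range(n_trials):
        present_stimulus()
        onset = time.monotonic()
        wait_for_response()
        rts.append((time.monotonic() - onset) * 1000.0)
        time.sleep(iti_s)  # inter-trial interval (100 ms in the second part)
    return rts

# Dummy callbacks standing in for the screen and the external sensor:
rts = run_trials(3, lambda: None, lambda: time.sleep(0.001))
print(len(rts))  # 3
```

The timestamp is taken immediately after the presentation call returns, so any latency between that return and the physical stimulus onset is exactly what the external sensors in the benchmark measure.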

Results and conclusions

Table 1 lists the main results of the benchmark experiment. On both Windows and Linux systems, the average response to a visual stimulus presentation was reliably below 2 ms. The observed difference in response times between Windows and Linux might result from a difference in the points in time at which OpenGL reports the vertical retrace (i.e., shortly before the retrace on Linux, and shortly after the retrace on Windows). Crucially, however, the measured timing accuracy was very stable, as indicated by the low standard deviations. The results show that Expyriment is very well suited for highly timing-critical visual presentations with millisecond precision.

Table 1 Results of the benchmark experiment

The average response to an auditory stimulus presentation for both systems was below 20 ms and, most importantly, was relatively stable. The difference in response times between the two operating systems most probably resulted from differences in the audio hardware used. These results suggest that Expyriment is well able to handle paradigms that entail auditory stimuli.

Differences from other Python libraries

Expyriment contributes to a pool of existing Python libraries with similar aims and functionality, such as PsychoPy (Peirce, 2007), OpenSesame (Mathôt et al., 2012), or Vision Egg (Straw, 2008). Besides our belief that any open-source contribution is a potentially welcome addition, offering an alternative choice to the end user (cf. Halchenko & Hanke, 2012), Expyriment differs from other libraries in some respects because of its different approach.

First, Expyriment is meant to be strictly a programming library and has no aims to become a graphical experiment builder like OpenSesame (which, in fact, uses Expyriment as the default internal back end for stimulus presentation) or a hybrid solution like PsychoPy. Expyriment therefore provides a relatively small and lightweight Python library for behavioral and neuroimaging experiments, with only very few software dependencies and no need to include any system-specific compiled code. As a result, Expyriment combines high performance with maximal portability.

A second important difference as compared with previous Python-based experiment software is that Expyriment can present stimuli without using OpenGL. The user needs to be aware that this is likely to result in increased variability in the timing of stimulus presentations. However, a presentation mode without OpenGL offers the interesting possibility of developing experiments on computer systems that do not support this graphics standard. Taken together, Expyriment is especially appropriate for experiments that are supposed to run on older or low-end systems or on other types of computing hardware, such as tablet PCs or low-cost embedded platforms such as the Raspberry Pi. Furthermore, an Expyriment runtime for Android is under development and is currently available as a preview version. In any case, the Expyriment test suite offers an easy way of checking the stimulus presentation and input recording accuracy on a specific system.

Third, as mentioned above, Expyriment is also a tool for the creation and manipulation of experimental designs—even in the absence of any stimulus presentation procedure or trial handling (as implemented, e.g., in PsychoPy). Although this feature is useful (and has already been used) in isolation to teach the formalization of experimental designs to students, using it in conjunction with the rest of the library provides an intuitive way of transitioning the conceptualization of an experimental design to its technical implementation.

Since Expyriment focuses on the accurate timing of preloaded static stimuli, it currently has shortcomings in the real-time generation and presentation of complex, dynamic visual stimuli. In such cases, we suggest using, for instance, PsychoPy, which provides several excellent high-level routines for developing the stimulus materials required especially in vision research.

Summary

In the present article, we have presented Expyriment, a lightweight Python library for designing and conducting cognitive and neuroscientific experiments. By means of an example experiment, we demonstrated how Expyriment assists the researcher: first, by producing readable source code that can easily be shared with and understood by other researchers or students; and second, by being centered around the experimental design rather than around stimulus presentation. The latter allows the researcher to conceptualize the experiment in a familiar form and eases the transition from the conceptual level of an experiment to its concrete implementation in programming code, setting Expyriment apart from previous Python libraries. Both of these aspects, together with the fact that Expyriment is freely available and runs not only on Windows and OS X, but also on Linux (many distributions of which are free as well), make it a very suitable tool for teaching. Furthermore, we demonstrated that Expyriment is capable of delivering millisecond precision when presenting visual stimuli and communicating with external devices. Due to its modular approach, Expyriment can also be used in conjunction with other Python libraries and can easily be extended through a unified plugin system. Taken together, Expyriment provides an easy, efficient, and flexible way to design and conduct timing-critical behavioral and neuroimaging experiments for researchers and students alike, independent of the choice of operating system.