Operant-based behavioral tasks are standard techniques in experimental psychology in which a rodent learns to press a lever or turn a wheel to receive an appetitive or aversive outcome (Crawley, 2007; Skinner, 1938). Standard operant paradigms, such as fixed-ratio (in which a reward is delivered every nth lever press) or variable-ratio (in which a reward is delivered after a pseudorandom number of lever presses) training, have been used to investigate addiction, impulsivity, and motivation (Halladay, Kocharian, & Holmes, 2017; Perry, Larson, German, Madden, & Carroll, 2005; Salamone & Correa, 2002). These operant-based tasks have been developed further over the years, particularly through the implementation of a computer touchscreen in place of levers. Touchscreen operant chambers have been used in a variety of species, including rodents (McTighe, Mar, Romberg, Bussey, & Saksida, 2009), birds (Cook, 1992), dogs (Range, Aust, Steurer, & Huber, 2008), and reptiles (Mueller-Paul et al., 2014). The development of a touchscreen platform for behavioral testing has allowed new methods for cognitive assessment in preclinical models (Bartko, Vendrell, Saksida, & Bussey, 2011; Bussey et al., 2012; Horner et al., 2013; Nithianantharajah et al., 2015). These methodologies are comparable to the human neuropsychological tests employed by the Cambridge Neuropsychological Test Automated Battery, such as the paired-associate learning (PAL) task and the trial-unique nonmatching to location (TUNL) task (Bartko et al., 2011; Bussey et al., 2012; Kim, Romberg, et al., 2015b; Mar et al., 2013; Nithianantharajah et al., 2015; Talpos, Winters, Dias, Saksida, & Bussey, 2009).

Just as patients in the clinic use an iPad or computer to respond to visual and audio cues during neurocognitive assessment, rodents can view a computer touchscreen and respond in a similar fashion (via nose pokes rather than finger touches) during behavioral testing in an operant chamber. Very often the rodent tasks use visual stimuli similar or identical to those used for testing in the clinic. Using this platform, the rodent is presented with an image on the computer screen and, depending on the task paradigm, is trained to respond to either the specific image or the location of the image via nose pokes on the touch-sensitive screen. A correct response elicits a food reward, whereas an incorrect response triggers a timeout. Through repeated trials, the rodent’s performance can be assessed and the neurobiology underlying the task can be studied.

Several tasks are currently available that assess different aspects of cognitive function and the associated neurophysiology. Visual discrimination and reversal learning, the five-choice serial reaction time task, and the continuous performance test all measure executive functions, such as cognitive flexibility, decision making, and attention, and have been shown to be sensitive to prefrontal cortex manipulation in rats and mice (Kim, Hvoslef-Eide, et al., 2015a; Mar et al., 2013). In addition, the location discrimination and TUNL tasks, which measure spatial learning, have been shown to depend on adult hippocampal neurogenesis and an intact hippocampal formation in rats and mice (Clelland et al., 2009; Creer, Romberg, Saksida, van Praag, & Bussey, 2010; McTighe et al., 2009; Oomen et al., 2013; Talpos, McTighe, Dias, Saksida, & Bussey, 2010). Similarly, the PAL task has been shown to be sensitive to glutamatergic inactivation of the hippocampus in rats (Talpos et al., 2009). Furthermore, impaired performance on the PAL task has been observed in patients with schizophrenia (Wood et al., 2002), and PAL performance has been identified as a predictive measure of Alzheimer’s disease pathology (Swainson et al., 2001).

The touchscreen operant platform for behavioral assessment in animals has several advantages relative to the standard maze apparatus commonly employed in rodent behavioral testing, such as the Morris water maze or radial arm maze. First, it enables the design of tasks that closely resemble human neuropsychological tests and is therefore highly translatable. For example, the audiovisual stimuli, as well as the task paradigm itself, such as the PAL task, can be set up to be identical to those used in tasks for humans (Talpos et al., 2009). Second, the touchscreen operant platform can be used to conduct behavioral assessments as part of a test battery. Although this is also the case for tasks using standard maze apparatuses, such as the Morris water maze or radial arm maze, the touchscreen platform provides a consistent environment and behavioral response/reward system, thereby reducing potential confounds from employing different maze equipment and paradigms. Third, the platform is automated, so a number of chambers can be run simultaneously for behavioral assessments. This increases the throughput of experimental animals and reduces the burden of labor on the experimenter. Although the touchscreen system has advantages over standard maze paradigms, current systems can cost upward of €25,000 for a four-chamber system. This can be prohibitively expensive for researchers with limited resources, as is often the case for early-career scientists or those in the developing world. Thus, given the relatively low cost of the components, the option of building a touchscreen chamber in-house is both attractive and viable. Indeed, several groups have already reported building low-cost operant chambers. Steurer, Aust, and Huber (2012) demonstrated a low-cost touchscreen operant chamber that could be used with a variety of species, such as pigeons, tortoises, and dogs. This system was significantly cheaper than commercial alternatives, at approximately €3,000. Pineño (2014) further reduced the price point of an in-house system by building a low-cost touchscreen operant chamber from a touch-sensitive iPod and an Arduino microcontroller. This was the first demonstration that off-the-shelf electronics could be used to build a touchscreen operant chamber for a fraction of the cost of commercially available alternatives, at only a few hundred euros. Although the system is innovative, its small touchscreen display limits its ability to run tasks similar to those of current state-of-the-art systems, such as the Bussey–Saksida chambers, although the addition of an iPad with a larger screen may help to overcome this limitation (Pineño, 2014). It is worth pointing out that the original aim of that study was to provide a proof of concept that off-the-shelf components could be used to build a low-cost alternative, and thus to lay the foundation for future work. Since then, Devarakonda, Nguyen, and Kravitz (2016) built the Rodent Operant Bucket (ROBucket), a standard operant chamber based on the Arduino microcontroller. The system consists of two nose-poke sensors and a liquid delivery system, supports both fixed-ratio and progressive-ratio training, and can be used to train mice to nose poke a receptacle for a sucrose solution (Devarakonda et al., 2016). In addition, Rizzi, Lodge, and Tan (2016) built a low-cost rodent nose-poke chamber using the Arduino microcontroller. Their system was composed of four nose-poke modules that detected and counted head entries.
Rizzi et al. successfully trained mice to prefer the nose-poke module that triggered optogenetic stimulation of dopaminergic neurons within the ventral tegmental area. Although both Devarakonda et al. and Rizzi et al. demonstrated low-cost alternatives, their systems were designed as standard operant chambers and therefore do not support the kind of translatable tasks available within a touchscreen operant platform. Here, we build on the previous work by Pineño, Devarakonda et al., and Rizzi et al. by combining the single-board Raspberry Pi™ computer and the 7-in. Raspberry Pi touchscreen with an Arduino microcontroller. We demonstrate that this low-cost touchscreen operant chamber is capable of supporting a number of tasks similar to those enabled by current state-of-the-art systems, such as autoshaping animals to nose-poke for a food reward, as well as more complex paradigms such as visual discrimination and the PAL and TUNL tasks.

The Raspberry Pi is a single-board computer roughly the size of a credit card. Despite its size and inexpensive price (approx. €30), the Pi runs a full computer operating system and is capable of supporting the same tasks as a typical desktop PC, such as word processing and web browsing. In addition, the Raspberry Pi has several general-purpose input–output (GPIO) pins. GPIO pins are generic pins on an integrated circuit whose function can be programmed by the user. For example, they can be programmed to receive a specific input (e.g., reading a temperature sensor) or to deliver a certain output (e.g., moving a servomotor). In addition, the Raspberry Pi touchscreen is a fully integrated touch-sensitive display that runs natively on the Raspberry Pi. The combination of a full PC operating system, a touch-sensitive display, easy hardware integration through the GPIO pins, and an inexpensive price makes the Raspberry Pi a very powerful platform for electronics projects, and therefore an ideal basis for a touchscreen operant chamber. This article describes a low-cost touchscreen operant chamber based on the Raspberry Pi.
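For readers unfamiliar with the Raspberry Pi, the following minimal sketch (our illustration, not code from the present system; the pin numbers are arbitrary) shows how GPIO pins can be configured as outputs or inputs from Python using the widely used RPi.GPIO library:

    # Minimal GPIO sketch (illustrative only; pin numbers are arbitrary).
    import time
    import RPi.GPIO as GPIO

    GPIO.setmode(GPIO.BCM)                 # refer to pins by their GPIO numbers
    GPIO.setup(18, GPIO.OUT)               # e.g., drive an LED from GPIO 18
    GPIO.setup(4, GPIO.IN, pull_up_down=GPIO.PUD_UP)  # e.g., read a sensor on GPIO 4

    GPIO.output(18, GPIO.HIGH)             # LED on
    time.sleep(1)
    GPIO.output(18, GPIO.LOW)              # LED off

    if GPIO.input(4) == GPIO.LOW:          # sensor pulls the pin low when triggered
        print("Input detected on GPIO 4")

    GPIO.cleanup()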

Materials and method

Hardware

The main components of the touchscreen operant chamber were a Raspberry Pi 2 (Raspberry Pi Foundation, UK), a 7-in. touchscreen display for the Raspberry Pi (Raspberry Pi Foundation, UK), and an Arduino Uno microcontroller (Arduino, Italy) (Figs. 1a and b). All components were purchased from Adafruit Industries, USA. The touchscreen display was connected to the Raspberry Pi and mounted within a Perspex box (35.6 × 23.4 × 22.8 cm), which was housed within a sound-attenuating box (63.5 × 43.2 × 42.2 cm) (Med Associates, USA). On the opposite side of the Perspex box was a food magazine, which consisted of a food hopper connected to a pellet delivery chute made from a PVC pipe. A servomotor within the hopper, controlled by the Raspberry Pi (Fig. 2), dispensed a 45-mg pellet after each correct response; the pellet fell down the delivery chute and into the collection receptacle (Figs. 1a and b). An LED light within the collection receptacle signaled a reward, and an infrared (IR) beam detected the collection of the food pellet. The IR beam/sensor was connected to the Arduino Uno, which was in turn connected to the Raspberry Pi via a USB port (Fig. 2). A piezo buzzer within the Perspex box was used to signal the delivery of the food pellet and was also controlled by the Raspberry Pi (Fig. 2). For a detailed list of the components and their associated prices at the time of publication, see Table 1. A commercially available Med Associates touchscreen operant chamber (consisting of a rectangular operant box with grid flooring, an overhead light, a touchscreen, and a food hopper; Med Associates, USA) was used for comparison.

Fig. 1
figure 1

Raspberry Pi touchscreen operant chamber. The Raspberry Pi and touchscreen were mounted to a Perspex box, with the food magazine and collection receptacle mounted opposite the display (a). Top-down view of the Raspberry Pi chamber (b). The touchscreen chamber was placed inside a sound-attenuating box

Fig. 2
figure 2

Wiring diagram of the Raspberry Pi and Arduino: The servomotor was connected to the Raspberry Pi 5-V pin, GND pin, and GPIO Pin 17. The food magazine LED was connected to the GPIO Pin 18 and GND pin. The Piezo buzzer was connected to the GPIO Pin 23 and GND pin. The Arduino was connected to the Raspberry Pi via a USB port. The infrared beam-break sensor was connected to the 5-V pin, 3.3-V pin, GND pin, and GPIO Pin 4 of the Arduino
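As an illustration of how these pin assignments might be driven in software, the following Python sketch (our own, not the code used in the study; the servo positions, timings, and buzzer duty cycle are assumptions) sets up the servomotor, magazine LED, and piezo buzzer with the RPi.GPIO library:

    # Hedged sketch of driving the Fig. 2 pin assignments from the Raspberry Pi.
    # Servo positions and timings are illustrative assumptions, not calibrated values.
    import time
    import RPi.GPIO as GPIO

    SERVO_PIN = 17    # food-hopper servomotor (GPIO 17)
    LED_PIN = 18      # food-magazine LED (GPIO 18)
    BUZZER_PIN = 23   # piezo buzzer (GPIO 23)

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(SERVO_PIN, GPIO.OUT)
    GPIO.setup(LED_PIN, GPIO.OUT)
    GPIO.setup(BUZZER_PIN, GPIO.OUT)

    servo = GPIO.PWM(SERVO_PIN, 50)        # standard 50-Hz servo control signal
    servo.start(0)
    buzzer = GPIO.PWM(BUZZER_PIN, 3000)    # 3-kHz square wave for the reward tone

    def dispense_pellet():
        """Swing the hopper servo out and back to drop a single pellet."""
        servo.ChangeDutyCycle(7.5)         # assumed 'dispense' position
        time.sleep(0.5)
        servo.ChangeDutyCycle(2.5)         # assumed rest position
        time.sleep(0.5)
        servo.ChangeDutyCycle(0)           # stop sending pulses

    def signal_reward(duration=1.0):
        """Switch on the magazine LED and play the tone for `duration` seconds."""
        GPIO.output(LED_PIN, GPIO.HIGH)
        buzzer.start(50)
        time.sleep(duration)
        buzzer.stop()
        # The LED remains on until reward collection is detected (see Software).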

Table 1 List of components of the Raspberry Pi chamber

Software

A program controlling the main functionality of the touchscreen chamber was written in Python (version 3.1.1), a high-level programming language, using the pygame library (https://www.pygame.org/news), and ran on the Raspberry Pi (Fig. 3). Briefly, the program displayed two images (two white squares) on the screen. Once either image was touched (i.e., nose-poked by the rat), the program moved the attached servomotor, located within the food hopper, which in turn dispensed a food pellet. Simultaneously, a tone was played through the buzzer, and an LED light within the food receptacle was turned on to signal reward delivery. An infrared (IR) beam within the food receptacle detected collection of the food reward. The next trial then began, and the same process was repeated. A second program, written as an Arduino sketch, signaled IR beam-break detection in the food collection receptacle. The code for the Arduino sketch was adapted from Adafruit.com example code (https://learn.adafruit.com/ir-breakbeam-sensors/overview). Each correct response was written to a text file and saved to the Raspberry Pi. These data were used to determine the animal’s performance during each session.
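A minimal sketch of this program logic is shown below. It is a simplified reconstruction for illustration rather than the published code: it assumes the dispense_pellet() and signal_reward() helpers from the hardware sketch above, an Arduino on /dev/ttyACM0 that prints one line over USB serial for each beam break, the 800 × 480 resolution of the 7-in. display, and a 60-min session.

    # Simplified autoshaping loop (illustrative reconstruction, not the published code).
    # Assumes dispense_pellet() and signal_reward() from the hardware sketch above,
    # and an Arduino on /dev/ttyACM0 that prints one line per IR beam break.
    import time
    import pygame
    import serial

    SESSION_S = 60 * 60                                   # 60-min session
    arduino = serial.Serial('/dev/ttyACM0', 9600, timeout=0.1)

    pygame.init()
    screen = pygame.display.set_mode((800, 480), pygame.FULLSCREEN)
    left = pygame.Rect(100, 140, 200, 200)                # left white square
    right = pygame.Rect(500, 140, 200, 200)               # right white square

    start = time.time()
    with open('session_log.txt', 'a') as log:
        while time.time() - start < SESSION_S:
            screen.fill((0, 0, 0))
            pygame.draw.rect(screen, (255, 255, 255), left)
            pygame.draw.rect(screen, (255, 255, 255), right)
            pygame.display.flip()

            for event in pygame.event.get():
                # The touchscreen reports nose-pokes as mouse clicks.
                if event.type == pygame.MOUSEBUTTONDOWN:
                    if left.collidepoint(event.pos) or right.collidepoint(event.pos):
                        dispense_pellet()
                        signal_reward()
                        log.write('1\n')                  # record the correct response
                        # Wait for the Arduino to report the beam break (reward
                        # collection) before starting the next trial.
                        while not arduino.readline():
                            pass

    pygame.quit()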

Fig. 3
figure 3

Flowchart of the autoshaping program: The program to run the touchscreen chamber consisted of a basic loop in which images were displayed on the screen and, if touched, triggered a “correct response” condition. This in turn activated the servomotor that dispensed a food pellet, played a tone, and turned on an LED light. The program looped for 60 min

Experimental design

Two male Sprague-Dawley rats (ten weeks old, bred in-house) were used to validate the Raspberry Pi touchscreen system. An additional group of three male Sprague-Dawley rats (eight weeks old) was obtained from Envigo Laboratories (The Netherlands) and trained in the standard Med Associates touchscreen operant chamber for comparison of training performance. The rats were group-housed in standard housing conditions (temperature 22 °C, relative humidity 50%) on a 12-h light/dark cycle (0730–1930). Water and rat chow were available ad libitum prior to food restriction. Rats were food restricted to 90% of their free-feeding weight so as to increase their motivation to seek out a food reward within the touchscreen operant paradigm. All experiments were conducted in accordance with the European Directive 2010/63/EU, under an authorization issued by the Health Products Regulatory Authority Ireland and approved by the Animal Ethics Committee of University College Cork.

Behavioral autoshaping protocol

Rats were food-deprived, with body weight maintained at 90% of their free-feeding weight during operant training, so as to increase their motivation to seek out a food reward. The autoshaping protocol was adapted from Horner et al. (2013) and was composed of three stages that served to shape the animals to touch the touchscreen for a food reward. Stage 1 involved habituation to the testing chamber for 30 min on two consecutive days, with ten pellets dispensed within the food magazine. The criterion for the animal to progress to the next stage of training was that all pellets were consumed within the 30-min session. The food magazine light was illuminated during food delivery and was switched off upon food collection. The house light was off, and no images were displayed on the screen. Stage 2 involved associating the displayed image with a food reward. Two images (white squares) were presented simultaneously for 30 s in two locations (left and right), separated by 5 cm. If no touch had occurred after 30 s, a food pellet was dispensed, the food magazine was illuminated, and a tone (1 s, 3 kHz) was sounded. If the image was touched by the animal, a reward (1 × 45-mg food pellet) was dispensed immediately, concurrently with the tone (1 s, 3 kHz), and the food magazine light was switched on. Upon reward collection, the magazine light was switched off and a 5-s intertrial interval (ITI) began, after which a new trial started. The session ended after 30 trials or 30 min, whichever came first. The criterion for the animals to progress to the next training stage was to complete 30 trials in 30 min. Stage 3 involved associating a touch of the image with a food reward. The protocol was the same as for Stage 2, except that the animal had to touch the displayed image to receive a reward. The session ended after 100 trials or 60 min. The criterion for the animals to complete the final stage of training was to complete 60 trials in 60 min on at least two consecutive days.
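For readers implementing a similar protocol in software, the stage parameters described above could be encoded as a simple configuration structure. The following Python dictionary is one possible encoding (our illustration, not part of the published program), with values taken from the protocol above:

    # Illustrative encoding of the autoshaping stages as task parameters
    # (the structure and field names are our assumptions).
    AUTOSHAPING_STAGES = {
        1: {  # habituation
            'session_min': 30,
            'free_pellets': 10,
            'images_displayed': False,
            'criterion': 'all pellets consumed within the session',
        },
        2: {  # image/reward pairing
            'stimulus_duration_s': 30,   # pellet delivered if no touch within 30 s
            'tone_s': 1,
            'tone_hz': 3000,
            'iti_s': 5,
            'max_trials': 30,
            'session_min': 30,
            'criterion': '30 trials in 30 min',
        },
        3: {  # touch response required
            'tone_s': 1,
            'tone_hz': 3000,
            'iti_s': 5,
            'max_trials': 100,
            'session_min': 60,
            'criterion': '60 trials in 60 min on two consecutive days',
        },
    }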

Results

Autoshaping task

Stage 1: Habituation

During Stage 1, two rats were habituated to the Raspberry Pi chamber environment over two days. During these two habituation days, both rats ate the ten food pellets within the food receptacle, and both were therefore advanced to the next stage of training. An additional three rats were similarly habituated to the Med Associates operant chamber. Likewise, the rats ate all ten food pellets within the food receptacle during the two habituation days and were thus advanced to the next stage of training.

Stage 2: Image/reward pairing

During Stage 2, image offset was paired with the food reward. Initially, both rats in the Raspberry Pi chamber completed only approximately ten trials per session (Figs. 4a and b). However, after five days of training, both rats completed 30 trials within 30 min (Figs. 4a and b) and were therefore advanced to the next stage of training. The rats trained in the Med Associates chamber outperformed the rats in the Raspberry Pi system, completing 100 trials in a single 60-min training session (Figs. 4c–e), and were therefore advanced to the next stage of training after one session.

Fig. 4
figure 4

Autoshaping. Completed trials during Stage 2 in the Raspberry Pi system (a and b) and in the Med Associates system (c–e). Completed trials during Stage 3 in the Raspberry Pi system (f and g) and in the Med Associates system (h–j)

Stage 3: Touch response

During Stage 3, the rats were required to touch the image for a food reward. Initially, performance by Rat 1 in the Raspberry Pi chamber was quite low, in that only three or four trials were completed within the 60-min session. However, after five days of training, Rat 1 completed 63 and then 73 trials on two consecutive days within the 60-min session (Fig. 4f). Similarly, the performance of Rat 2 in the Raspberry Pi chamber was initially inconsistent across training days, with only six trials completed on the first day, followed by 62 trials on Day 2 but then only 17 trials on Day 3. However, after five days of training, Rat 2 completed 112 trials on two consecutive days within the 60-min session (Fig. 4g). During Stage 3, the rats in the Med Associates chambers quickly reached the learning criterion. Specifically, Rat 3’s performance was quite low on the first day of training; however, it quickly improved, resulting in the completion of 96 and 100 trials on Training Days 2 and 3, respectively (Fig. 4h). Similarly, Rats 4 and 5 completed 67 and 81 trials on Day 1, and 98 and 100 trials on Training Day 2, respectively (Figs. 4i and j). A direct comparison of Stage 3 performance in the two systems showed that the rats trained in the Raspberry Pi system were slower to reach the learning criterion than the rats trained in the Med Associates system (Fig. 5). However, all rats had reached a similar level of performance by Days 4 and 5 (Fig. 5), indicating that all rats had learned to touch the image for a food reward, regardless of the touchscreen operant chamber system used.

Fig. 5
figure 5

Comparison of training performance: Completed trials during Stage 3 for rats trained in the Raspberry Pi or the Med Associates system

Discussion

Here we describe a low-cost touchscreen operant chamber based on the Raspberry Pi, a single-board computer system. Specifically, two rats were successfully trained to nose poke two white squares in the low-cost touchscreen operant chamber, and their performance was compared with that of rats trained in a standard Med Associates touchscreen operant chamber. Both rats trained in the low-cost Raspberry Pi system reached the learning criterion of 60 trials within 60 min on two consecutive days within ten days. For comparison with a commercially available system, three rats were trained in the standard Med Associates touchscreen operant chamber; these rats reached the learning criterion of 60 trials within 60 min on two consecutive days within four days of testing. Previous studies have reported levels of performance and rates of training acquisition similar to those observed here with the Raspberry Pi system. Specifically, Horner et al. (2013), Mar et al. (2013), and Oomen et al. (2013) reported that the learning criterion was reached within five days, and Sbisa, Gogos, and van den Buuse (2017) reported successful training after 13 days. The slower acquisition rate of the rats trained in the Raspberry Pi system may be due to the design of the reward collection receptacle itself (a piece of PVC pipe). For example, in the Raspberry Pi system the food pellet may land at the front or the back of the delivery chute (PVC pipe), leading to slight inconsistencies in reward placement and subsequently affecting task acquisition. This limitation could be addressed by further optimization of the collection receptacle. Nevertheless, our data demonstrate that the present system is a potentially viable, low-cost alternative to current state-of-the-art systems.

That said, a number of improvements and alterations could be applied to our system to advance its development. For example, the acquisition rate of the animals could be improved by the use of “screen masks” that guide the animal’s responses to the specific active windows of the touchscreen where an image is presented. Screen masks physically cover the touchscreen except for the response windows where the image is presented, thereby directing the rodent’s attention and nose pokes to the specific area of the screen that will elicit a food reward. This would help shape the animal’s response and improve task acquisition. Furthermore, the Perspex rectangular box described here could easily be changed to a trapezoidal box, which has been suggested as a means of focusing the attention of an experimental animal on the touchscreen, thereby improving task acquisition. We report an overall cost of the touchscreen chamber of approximately €160, which, as of the date the manuscript was submitted, was substantially less than the previous estimate of USD 300 reported by Pineño (2014). This price could be reduced further by eliminating the Arduino microcontroller. Here we used the Arduino to control the IR beam in order to detect reward collection; the Arduino could be removed and the IR sensor controlled by the Raspberry Pi instead, reducing the overall cost of the hardware by approximately €20.
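As an illustration of this simplification, the beam-break receiver’s output could be read directly on a spare GPIO pin of the Raspberry Pi. The following sketch (our suggestion; the pin choice is arbitrary and the wiring is assumed) shows the idea:

    # Illustrative sketch of reading the IR break-beam sensor directly on the Pi,
    # eliminating the Arduino. GPIO 24 is an arbitrary choice; the receiver output
    # is assumed to be wired so that it pulls the pin low when the beam is broken.
    import RPi.GPIO as GPIO

    BEAM_PIN = 24

    GPIO.setmode(GPIO.BCM)
    GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

    def wait_for_collection():
        """Block until the IR beam is broken (pin pulled low) by a head entry."""
        GPIO.wait_for_edge(BEAM_PIN, GPIO.FALLING)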

It should be noted that a limitation of the low-cost approach is that each task program has to be written individually, which requires both time and programming knowledge. Moreover, the present system runs a .py file from within the Python IDLE (Integrated Development and Learning Environment), and therefore requires some programming knowledge to operate even once it is set up. This limitation could be overcome by the development of a graphical user interface (GUI). A GUI would allow for a better end-user experience, similar to that of current top-end systems, such as the Med Associates system used in the present study. The GUI could also provide other functionality, such as data analysis and task building for future behavioral assessments. Although the development of a GUI would require significant work, it would enable the adoption of low-cost alternative systems by less technologically savvy researchers. Indeed, Pineño (2014) developed a GUI that allowed wireless pairing of the iPod touch within the operant chamber with a second iOS device, such as an iPhone or iPad, for graphing and monitoring the animal’s behavior during the experimental session. In the short term, the program presented here could also be improved by better data-handling capabilities, similar to those described by Pineño. Currently, the program simply records a “1” to a text file after every correct response, and these entries are summed at the end of the program to generate a basic performance score. This could be improved by recording response latencies, reward collection latencies, and screen touches during the ITI as measures of perseveration, as well as a heat map of screen touches throughout the session, to aid the detection of location biases in individual animals.
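One possible direction for such data handling is sketched below (our suggestion, not existing functionality): each trial is written as a CSV row containing latencies and touch coordinates, which could later be aggregated into a heat map of screen touches.

    # Illustrative per-trial logging with latencies and touch coordinates
    # (field names and file format are our assumptions).
    import csv

    FIELDS = ['trial', 'response_latency_s', 'collection_latency_s',
              'iti_touches', 'touch_x', 'touch_y']

    def log_trial(writer, trial, stim_onset, touch_time, collection_time,
                  iti_touches, touch_pos):
        """Write one trial's measures as a CSV row."""
        writer.writerow({
            'trial': trial,
            'response_latency_s': round(touch_time - stim_onset, 3),
            'collection_latency_s': round(collection_time - touch_time, 3),
            'iti_touches': iti_touches,
            'touch_x': touch_pos[0],
            'touch_y': touch_pos[1],
        })

    with open('session_data.csv', 'w', newline='') as f:
        writer = csv.DictWriter(f, fieldnames=FIELDS)
        writer.writeheader()
        # log_trial(writer, ...) would be called at the end of each trial,
        # using timestamps collected during the session loop.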

In summary, our work has advanced the previous work by Pineño (2014), Devarakonda et al. (2016), and Rizzi et al. (2016) by combining the Raspberry Pi and its 7-in. touchscreen display with an Arduino microcontroller to create a low-cost touchscreen operant chamber capable of running autoshaping as well as more complex paradigms, such as the PAL or TUNL tasks, that are available in the Med Associates and other state-of-the-art commercially available systems. This low-cost alternative will provide researchers who have limited funding with a viable option to carry out cognitive testing on a touchscreen operant platform. Although the chamber described here is a prototype and requires some knowledge of programming and electronics to operate, it demonstrates that low-cost systems are capable of conducting behavioral tasks similar to those of high-end commercially available systems.

Author note

This work was funded by Science Foundation Ireland (SFI) under Grant Number SFI/IA/1537. The authors declare no conflict of interest.