A low-cost touchscreen operant chamber using a Raspberry Pi™
The development of a touchscreen platform for rodent testing has allowed new methods for cognitive testing that have been back-translated from clinical assessment tools to preclinical animal models. This platform for cognitive assessment in animals is comparable to human neuropsychological tests such as those employed by the Cambridge Neuropsychological Test Automated Battery, and thus has several advantages compared to the standard maze apparatuses typically employed in rodent behavioral testing, such as the Morris water maze. These include improved translation of preclinical models, as well as high throughput and the automation of animal testing. However, these systems are relatively expensive, which can impede progress for researchers with limited resources. Here we describe a low-cost touchscreen operant chamber based on the single-board computer, Raspberry Pi™, which is capable of performing tasks similar to those supported by current state-of-the-art systems. This system provides an affordable alternative for cognitive testing in a touchscreen operant paradigm for researchers with limited funding.
Keywords: Cognition · Touchscreen operant chamber · Operant behavior · Raspberry Pi · Arduino · Automation
Operant-based behavioral tasks are standard techniques used in experimental psychology in which a rodent learns to press a lever or turn a wheel to receive an appetitive or aversive outcome (Crawley, 2007; Skinner, 1938). Standard operant paradigms, such as fixed-ratio (in which a reward is delivered every nth lever press) or variable-ratio (in which a reward is delivered after a pseudorandom number of lever presses) training, have been used to investigate addiction, impulsivity, and motivation (Halladay, Kocharian, & Holmes, 2017; Perry, Larson, German, Madden, & Carroll, 2005; Salamone & Correa, 2002). These operant-based tasks have been further developed over the years, particularly through the implementation of a computer touchscreen in place of levers. Touchscreen operant chambers have been used in a variety of species, including rodents (McTighe, Mar, Romberg, Bussey, & Saksida, 2009), birds (Cook, 1992), dogs (Range, Aust, Steurer, & Huber, 2008), and reptiles (Mueller-Paul et al., 2014). The development of a touchscreen platform for behavioral testing has allowed new methods for cognitive assessment in preclinical models (Bartko, Vendrell, Saksida, & Bussey, 2011; Bussey et al., 2012; Horner et al., 2013; Nithianantharajah et al., 2015). These methodologies are comparable to the human neuropsychological tests employed by the Cambridge Neuropsychological Test Automated Battery, such as the paired-associates learning (PAL) task and the trial-unique nonmatching to location (TUNL) task (Bartko et al., 2011; Bussey et al., 2012; Kim, Romberg, et al., 2015b; Mar et al., 2013; Nithianantharajah et al., 2015; Talpos, Winters, Dias, Saksida, & Bussey, 2009). Just as patients in the clinic use an iPad/computer to respond to visual and audio cues during neurocognitive assessment, rodents can view a computer touchscreen and respond in a similar fashion (via nose pokes rather than finger touches) during behavioral testing in an operant chamber.
Very often the rodent tasks have visual stimuli similar or identical to the stimuli used for testing in the clinic. Using this platform, the rodent is presented with an image on the computer screen and, depending on the task paradigm, is trained to respond to either the specific image or the location of the image via nose pokes on the touch-sensitive computer screen. A correct response elicits a food reward, whereas an incorrect response triggers a timeout. Through repeated trials the rodent’s performance can be assessed and the underlying neurobiology required for the task can be studied. Currently, several tasks are available that assess different aspects of cognitive function and associated neurophysiology, such as visual discrimination and reversal learning, the five-choice serial reaction time task, and the continuous performance test, which all measure executive functions, such as cognitive flexibility, decision making, and attention, and have been shown to be sensitive to prefrontal cortex manipulation in rats and mice (Kim, Hvoslef-Eide, et al., 2015a; Mar et al., 2013). In addition, the location discrimination and TUNL tasks, which measure spatial learning, have been shown to be dependent on adult hippocampal neurogenesis and an intact hippocampal formation in rats and mice (Clelland et al., 2009; Creer, Romberg, Saksida, van Praag, & Bussey, 2010; McTighe et al., 2009; Oomen et al., 2013; Talpos, McTighe, Dias, Saksida, & Bussey, 2010). Similarly, the PAL task has been shown to be sensitive to glutamatergic inactivation of the hippocampus in rats (Talpos et al., 2009). Furthermore, impaired performance in the PAL task has been shown in patients with schizophrenia (Wood et al., 2002), and PAL performance has been identified as a predictive measure of Alzheimer’s disease pathology (Swainson et al., 2001).
The touchscreen operant platform for behavioral assessment in animals has several advantages relative to the standard maze apparatus commonly employed in rodent behavioral testing, such as the Morris water maze or radial arm maze. First, it enables the design of tasks that better represent human neuropsychological tests, and thus is highly translatable. For example, the audiovisual stimuli, as well as the task paradigm itself (such as the PAL task), can be set up to be identical to those used in tasks for humans (Talpos et al., 2009). Second, the touchscreen operant platform can be used to conduct behavioral assessments as part of a test battery. Although this is also the case for tasks using standard maze apparatuses, such as the Morris water maze or radial arm maze, the touchscreen platform provides a consistent environment and behavioral response/reward system, thereby reducing any potential confounds from employing different maze equipment and paradigms. Third, the platform is automated, so a number of chambers can be run simultaneously for behavioral assessments. This increases the throughput of experimental animals and reduces the burden of labor on the experimenter. Although the touchscreen system has advantages over standard maze paradigms, current systems can cost upward of €25,000 for a four-chamber system. This can be prohibitively expensive for researchers with limited resources, as is often the case for early-career scientists or those in the developing world. Thus, given the relatively low cost of the components, the option of building a touchscreen chamber in-house is both attractive and viable. Indeed, several groups have already reported building low-cost operant chambers. Steurer, Aust, and Huber (2012) demonstrated a low-cost touchscreen operant chamber that could be used by a variety of species, such as pigeons, tortoises, and dogs. This system was significantly cheaper than commercial alternatives, at approximately €3,000.
Moreover, work by Pineño (2014) further reduced the price point of an in-house system by building a low-cost touchscreen operant chamber using a touch-sensitive iPod and an Arduino microcontroller. This group was the first to demonstrate a low-cost touchscreen operant chamber using off-the-shelf electronics for a fraction of the cost of commercially available alternatives, at only a few hundred euros. Although the system is innovative, its small touchscreen display limits its ability to run tasks similar to those of the current state-of-the-art systems, such as the Bussey–Saksida chambers, although the addition of an iPad with a larger screen may help to overcome this limitation (Pineño, 2014). It is worth pointing out that the original aim of that study was to showcase a proof of concept that off-the-shelf components could be used to build a low-cost alternative, and thus to lay the foundation for future work. Since then, Devarakonda, Nguyen, and Kravitz (2016) built the Rodent Operant Bucket (ROBucket), a standard operant chamber based on the Arduino microcontroller. The system consists of two nose-poke sensors and a liquid delivery system, supports both fixed-ratio and progressive-ratio training, and can be used to train mice to nose-poke a receptacle for a sucrose solution (Devarakonda et al., 2016). Moreover, Rizzi, Lodge, and Tan (2016) built a low-cost rodent nose-poke chamber using the Arduino microcontroller. Their system was composed of four nose-poke modules that detected and counted head entries. Rizzi et al. successfully trained mice to prefer the nose-poke module that triggered optogenetic stimulation of dopaminergic neurons within the ventral tegmental area. Although both Devarakonda et al. and Rizzi et al. demonstrated low-cost alternatives, their systems are designed as standard operant chambers and therefore do not support the translatable tasks available within a touchscreen operant platform.
Here, we build on the previous work by Pineño, Devarakonda et al., and Rizzi et al. by combining the single-board Raspberry Pi™ computer and 7-in. Raspberry Pi touchscreen with an Arduino microcontroller. We demonstrate that this low-cost touchscreen operant chamber is capable of supporting a number of tasks similar to those enabled by current state-of-the-art systems, such as autoshaping animals to nose-poke for a food reward, as well as more complex paradigms such as visual discrimination and the PAL and TUNL tasks.
The Raspberry Pi is a single-board computer, roughly the size of a credit card. Despite its size and inexpensive price (approx. €30), the Pi runs a full computer operating system and is capable of supporting the same tasks as a typical desktop PC—for instance, word processing and web browsing. In addition, the Raspberry Pi has several general-purpose input–output (GPIO) pins. GPIO pins are generic pins on an integrated circuit whose function can be programmed by the user. For example, they can be programmed to receive a specific input (e.g., reading a temperature sensor) or to deliver a certain output (e.g., moving a servo motor). In addition, the Raspberry Pi touchscreen is a fully integrated touch-sensitive display that runs natively on the Raspberry Pi. The combination of a full PC operating system, a touch-sensitive display, easy hardware integration through the GPIO pins, and an inexpensive price makes the Raspberry Pi a very powerful platform for electronics projects, and therefore an ideal basis for a touchscreen operant chamber. This article describes a low-cost touchscreen operant chamber based on the Raspberry Pi, a single-board computer system.
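To make the GPIO concept concrete, the sketch below shows how a magazine light (output) and an IR break-beam sensor (input) might be driven from Python. The pin numbers and the off-Pi stub are illustrative assumptions, not the published wiring; on a Raspberry Pi the real RPi.GPIO library would be used instead of the stub.

```python
# Minimal GPIO sketch: one output pin (magazine LED), one input pin
# (IR break-beam sensor). Pin numbers are arbitrary examples.
try:
    import RPi.GPIO as GPIO  # available only on a Raspberry Pi
except ImportError:
    class _StubGPIO:
        """Tiny stand-in so the sketch also runs off-Pi."""
        BCM, OUT, IN, HIGH, LOW, PUD_UP = range(6)

        def __init__(self):
            self.state = {}

        def setmode(self, mode):
            pass

        def setup(self, pin, direction, pull_up_down=None):
            self.state[pin] = 0

        def output(self, pin, level):
            self.state[pin] = level

        def input(self, pin):
            return self.state.get(pin, 0)

    GPIO = _StubGPIO()

LED_PIN = 17    # hypothetical pin for the food-magazine light
BEAM_PIN = 27   # hypothetical pin for the IR break-beam receiver

GPIO.setmode(GPIO.BCM)
GPIO.setup(LED_PIN, GPIO.OUT)
GPIO.setup(BEAM_PIN, GPIO.IN, pull_up_down=GPIO.PUD_UP)

def magazine_light(on):
    """Switch the food-magazine LED on or off."""
    GPIO.output(LED_PIN, GPIO.HIGH if on else GPIO.LOW)

def beam_broken():
    """True when the rat's head interrupts the IR beam (pin pulled low)."""
    return GPIO.input(BEAM_PIN) == GPIO.LOW
```

The same pattern (one `setup` call per pin, then `input`/`output`) covers every peripheral in the chamber: house light, buzzer, and pellet-dispenser servo.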
Materials and method
List of components of the Raspberry Pi chamber
Raspberry Pi 2 Model B ARMv7
7-in. touchscreen display for the Raspberry Pi
Arduino Uno microcontroller
Buzzer (local electronics store)
Pack of white LEDs
IR break-beam sensor (5-mm LEDs)
Continuous-rotation servo (FeeTech FS5103R)
PVC pipe for food magazine
Pack of assorted electrical wire
Two male Sprague-Dawley rats (ten weeks old, bred in-house) were used to validate the Raspberry Pi touchscreen system. An additional group of three male Sprague-Dawley rats (eight weeks old) was obtained from Envigo Laboratories (The Netherlands) and trained in the standard Med Associates touchscreen operant chamber, to compare training performance. The rats were group-housed under standard housing conditions (temperature 22 °C, relative humidity 50%) on a 12-h light/dark cycle (0730–1930). Water and rat chow were available ad libitum prior to food restriction. Rats were food restricted to 90% of their free-feeding weight so as to increase their motivation to seek out a food reward within the touchscreen operant paradigm. All experiments were conducted in accordance with the European Directive 2010/63/EU, under an authorization issued by the Health Products Regulatory Authority Ireland, and approved by the Animal Ethics Committee of University College Cork.
Behavioral autoshaping protocol
Rats were food-deprived, with body weight maintained at 90% of their free-feeding weight during operant training so as to increase their motivation to seek out a food reward. The autoshaping protocol was adapted from Horner et al. (2013) and was composed of three stages that served to shape the animals to touch the touchscreen for a food reward. Stage 1 involved habituation to the testing chambers for 30 min on two consecutive days, with ten pellets dispensed within the food magazine. The criterion for the animal to progress to the next stage of training was that all pellets were consumed within the 30-min session. The food magazine light was illuminated during food delivery and was switched off upon food collection. The house light was off, and no images were displayed on the screen. Stage 2 involved associating the displayed image with a food reward. Two images (white squares) were presented simultaneously for 30 s in two locations (left and right), separated by 5 cm. If no touch had occurred after 30 s, a food pellet was dispensed, the food magazine was illuminated, and a tone (1 s, 3 kHz) was sounded. If the image was touched by the animal, a reward (1 × 45-mg food pellet) was dispensed immediately and concurrently with the tone (1 s, 3 kHz), and the food magazine light was switched on. Upon reward collection, the magazine light was switched off and an intertrial interval (ITI; 5 s) began, following which a new trial started. The session ended after 30 trials or 30 min, whichever came first. The criterion for the animals to progress to the next training stage was to complete 30 trials in 30 min. Stage 3 involved associating the image touch with a food reward. The protocol was the same as for Stage 2, except that the animal had to touch the displayed image to receive a reward. The session ended after 100 trials or 60 min. The criterion for the animals to complete the final stage of training was to complete 60 trials in 60 min on at least two consecutive days.
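The Stage 2 and Stage 3 contingencies described above reduce to a small amount of trial logic. The following Python sketch captures only that logic; the hardware calls (screen drawing, tone, pellet dispenser) are omitted, and the function names and return convention are our own illustrative choices, not the published code.

```python
# Trial-logic sketch for the autoshaping stages. A touch latency of
# None means the animal never touched the image during the trial.
STIMULUS_DURATION_S = 30   # Stage 2 display window (from the protocol)
ITI_S = 5                  # intertrial interval

def stage2_trial(touch_latency_s):
    """Stage 2: reward on touch, or automatically after 30 s.

    Returns (rewarded, counted_as_touch)."""
    if touch_latency_s is not None and touch_latency_s <= STIMULUS_DURATION_S:
        return True, True      # image touched: immediate reward + tone
    return True, False         # no touch: reward dispensed after timeout

def stage3_trial(touch_latency_s):
    """Stage 3: reward only if the displayed image is touched."""
    touched = touch_latency_s is not None
    return touched, touched

def session(touch_latencies, trial_fn, max_trials=30):
    """Run a list of touch latencies through one stage; return rewards earned."""
    return sum(trial_fn(t)[0] for t in touch_latencies[:max_trials])
```

In the real program, each trial would additionally drive the magazine light, the 3-kHz tone, and the 5-s ITI between calls.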
Stage 1: Habituation
During Stage 1, two rats were habituated to the Raspberry Pi chamber environment over two days. During these two habituation days, both rats ate the ten food pellets within the food receptacle, and both were therefore advanced to the next stage of training. An additional three rats were similarly habituated to the Med Associates operant chamber. Likewise, the rats ate all ten food pellets within the food receptacle during the two habituation days and were thus advanced to the next stage of training.
Stage 2: Image/reward pairing
Stage 3: Touch response
Here we describe a low-cost touchscreen operant chamber based on the Raspberry Pi, a single-board computer system. Specifically, two rats were successfully trained to nose-poke two white squares in a low-cost touchscreen operant chamber, and their performance was compared to that of rats trained in a standard Med Associates touchscreen operant chamber. Both rats trained in the low-cost Raspberry Pi system reached the learning criterion of 60 trials within 60 min on two consecutive days within ten days. For comparison with a commercially available system, three rats were trained in the standard Med Associates touchscreen operant chamber. Rats trained in the Med Associates chamber reached the learning criterion of 60 trials within 60 min on two consecutive days within four days of testing. Previous studies have shown levels of performance and training acquisition similar to those reported here for the Raspberry Pi system. Specifically, Horner et al. (2013), Mar et al. (2013), and Oomen et al. (2013) reported that the learning criterion was reached within five days, and Sbisa, Gogos, and van den Buuse (2017) reported successful training after 13 days. The slower acquisition rate of rats trained in the Raspberry Pi system may be due to the design of the reward collection receptacle itself (a piece of PVC pipe). For example, in the Raspberry Pi system, the food pellet may land in the front or the back of the delivery chute (PVC pipe), leading to slight inconsistencies in reward placement and subsequently affecting task acquisition. This limitation could be overcome by further optimization of the collection receptacle. Nevertheless, our data demonstrate that the present system is a potentially viable, low-cost alternative to the current state-of-the-art systems.
Nevertheless, a number of improvements and alterations could be applied to our system to advance its development. For example, the acquisition rate of the animals could be improved by the use of “screen masks” that aid the animal’s response to the specific active windows of the touchscreen where an image is presented. Screen masks physically cover the touchscreen except for the response windows where the image is presented, thereby directing the rodent’s attention and nose-pokes to the specific area of the screen that will elicit a food reward. This would help shape the animal’s response and improve task acquisition. Furthermore, the Perspex rectangular box described here could easily be changed to a trapezoid box, which has been suggested as a means to help focus the attention of an experimental animal toward the touchscreen, thereby improving task acquisition. We report an overall cost of the touchscreen chamber of approximately €160, which, as of the date the manuscript was submitted, was substantially less than the previous estimate of USD 300 reported by Pineño (2014). This price could be further reduced by eliminating the Arduino microcontroller. Here we used the Arduino to control the IR beam in order to detect reward collection. The Arduino could be removed and the IR sensor controlled by the Raspberry Pi, reducing the overall cost of the hardware by approximately €20.
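As a sketch of how the Arduino could be dropped, the reward-collection wait below keeps the timing logic in Python on the Pi. The sensor read is passed in as a callable, which on a Pi would wrap `GPIO.input()` on the break-beam pin (or be replaced by an interrupt via `GPIO.add_event_detect`); the timeout and polling interval are illustrative assumptions, not measured values.

```python
import time

def wait_for_collection(beam_broken, timeout_s=60.0, poll_s=0.01,
                        clock=time.monotonic):
    """Poll the break-beam sensor until the reward is collected.

    beam_broken -- zero-argument callable; True when the beam is interrupted
                   (on a Pi, e.g. lambda: GPIO.input(BEAM_PIN) == GPIO.LOW).
    Returns the collection latency in seconds, or None on timeout."""
    start = clock()
    while clock() - start < timeout_s:
        if beam_broken():
            return clock() - start
        time.sleep(poll_s)   # cheap polling; an interrupt would avoid this
    return None
```

Because the sensor is injected, the same function runs unmodified against real hardware or a test double, and the measured latency doubles as the reward-collection-latency metric discussed below.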
It should be noted that a limitation of the low-cost approach is that each behavioral task has to be programmed individually, which requires both time and programming knowledge. Moreover, the present system runs a .py file from within the Python IDLE (Integrated Development and Learning Environment), and therefore requires some programming knowledge to operate once it is set up. This limitation could be overcome by the development of a graphical user interface (GUI). A GUI would allow for a better end-user experience, similar to that of the current top-end systems, such as the Med Associates system used in the present study. The GUI could also facilitate other functionality, such as data analysis and task building for future behavioral assessment. Although the development of a GUI would require significant work, it would also enable the adoption of low-cost alternative systems by less technologically savvy researchers. Indeed, Pineño (2014) developed a GUI that allowed the wireless pairing of the iPod touch within the operant chamber with a second iOS device, such as an iPhone or iPad, for graphing and monitoring the animal’s behavior during the experimental session. In the short term, the program presented here could also be improved by better data-handling capabilities, similar to those described by Pineño. Currently, the program simply records a “1” to a text file after every correct response, and the numbers are summed at the end of the program to generate a basic performance score. This could be improved by including response latencies, reward collection latencies, and screen touches during the ITI as measures of perseveration, as well as a heat map of screen touches throughout the session to aid detection of location bias in individual animals.
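A minimal sketch of the richer per-trial logging suggested above, assuming illustrative field names: each trial is written as one CSV row carrying latencies and touch coordinates (the coordinates could later feed a location heat map), and a small summary replaces the current summed-“1”s score.

```python
import csv

# Illustrative per-trial record layout, not the published file format.
TRIAL_FIELDS = ["trial", "correct", "response_latency_s",
                "collection_latency_s", "touch_x", "touch_y", "iti_touches"]

def write_session_log(path, trials):
    """Write one CSV row per trial; `trials` is a list of dicts
    keyed by TRIAL_FIELDS."""
    with open(path, "w", newline="") as f:
        writer = csv.DictWriter(f, fieldnames=TRIAL_FIELDS)
        writer.writeheader()
        writer.writerows(trials)

def summarize(trials):
    """Basic performance score plus mean response latency over
    correct trials (None if no correct trials)."""
    correct = [t for t in trials if t["correct"]]
    n = len(correct)
    mean_lat = (sum(t["response_latency_s"] for t in correct) / n) if n else None
    return {"n_correct": n, "mean_response_latency_s": mean_lat}
```

Keeping one row per trial means ITI touches (perseveration) and touch coordinates come for free at analysis time, with no change to the chamber-side program beyond filling in the dict.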
In summary, our work has advanced previous work by Pineño (2014), Devarakonda et al. (2016), and Rizzi et al. (2016) by combining the Raspberry Pi and a 7-in. touchscreen display with an Arduino microcontroller to create a low-cost touchscreen operant chamber capable of performing tasks such as the autoshaping task and other more complex paradigms, such as the PAL or TUNL, that are available in the Med Associates and other state-of-the-art commercially available systems. This low-cost alternative system will provide researchers who have limited funding with a viable option to carry out cognitive testing in a touchscreen operant platform. Although the chamber described here is a prototype and requires some knowledge of programming and electronics by the user in order to operate it, it demonstrates that low-cost systems are capable of conducting similar behavioral tasks to those of the high-end commercially available systems.
This work was funded by Science Foundation Ireland (SFI) under Grant Number SFI/IA/1537. The authors declare no conflict of interest.
- Bartko, S. J., Vendrell, I., Saksida, L. M., & Bussey, T. J. (2011). A computer-automated touchscreen paired-associates learning (PAL) task for mice: Impairments following administration of scopolamine or dicyclomine and improvements following donepezil. Psychopharmacology, 214, 537–548. https://doi.org/10.1007/s00213-010-2050-1
- Bussey, T. J., Holmes, A., Lyon, L., Mar, A. C., McAllister, K. A., Nithianantharajah, J., … Saksida, L. M. (2012). New translational assays for preclinical modelling of cognition in schizophrenia: The touchscreen testing method for mice and rats. Neuropharmacology, 62, 1191–1203. https://doi.org/10.1016/j.neuropharm.2011.04.011
- Clelland, C. D., Choi, M., Romberg, C., Clemenson, G. D., Jr., Fragniere, A., Tyers, P., … Bussey, T. J. (2009). A functional role for adult hippocampal neurogenesis in spatial pattern separation. Science, 325, 210–213. https://doi.org/10.1126/science.1173215
- Horner, A. E., Heath, C. J., Hvoslef-Eide, M., Kent, B. A., Kim, C. H., Nilsson, S. R., … Bussey, T. J. (2013). The touchscreen operant platform for testing learning and memory in rats and mice. Nature Protocols, 8, 1961–1984. https://doi.org/10.1038/nprot.2013.122
- Kim, C. H., Hvoslef-Eide, M., Nilsson, S. R., Johnson, M. R., Herbert, B. R., Robbins, T. W., … Mar, A. C. (2015a). The continuous performance test (rCPT) for mice: A novel operant touchscreen test of attentional function. Psychopharmacology, 232, 3947–3966. https://doi.org/10.1007/s00213-015-4081-0
- Kim, C. H., Romberg, C., Hvoslef-Eide, M., Oomen, C. A., Mar, A. C., Heath, C. J., … Saksida, L. M. (2015b). Trial-unique, delayed nonmatching-to-location (TUNL) touchscreen testing for mice: Sensitivity to dorsal hippocampal dysfunction. Psychopharmacology, 232, 3935–3945. https://doi.org/10.1007/s00213-015-4017-8
- Nithianantharajah, J., McKechanie, A. G., Stewart, T. J., Johnstone, M., Blackwood, D. H., St. Clair, D., … Saksida, L. M. (2015). Bridging the translational divide: Identical cognitive touchscreen testing in mice and humans carrying mutations in a disease-relevant homologous gene. Scientific Reports, 5, 14613. https://doi.org/10.1038/srep14613
- Oomen, C. A., Hvoslef-Eide, M., Heath, C. J., Mar, A. C., Horner, A. E., Bussey, T. J., & Saksida, L. M. (2013). The touchscreen operant platform for testing working memory and pattern separation in rats and mice. Nature Protocols, 8, 2006–2021. https://doi.org/10.1038/nprot.2013.124
- Perry, J. L., Larson, E. B., German, J. P., Madden, G. J., & Carroll, M. E. (2005). Impulsivity (delay discounting) as a predictor of acquisition of IV cocaine self-administration in female rats. Psychopharmacology, 178, 193–201. https://doi.org/10.1007/s00213-004-1994-4
- Skinner, B. F. (1938). The behavior of organisms: An experimental analysis. New York: Appleton-Century.
- Swainson, R., Hodges, J. R., Galton, C. J., Semple, J., Michael, A., Dunn, B. D., … Sahakian, B. J. (2001). Early detection and differential diagnosis of Alzheimer’s disease and depression with neuropsychological tasks. Dementia and Geriatric Cognitive Disorders, 12, 265–280. https://doi.org/10.1159/000051269
- Talpos, J. C., McTighe, S. M., Dias, R., Saksida, L. M., & Bussey, T. J. (2010). Trial-unique, delayed nonmatching-to-location (TUNL): A novel, highly hippocampus-dependent automated touchscreen test of location memory and pattern separation. Neurobiology of Learning and Memory, 94, 341–352. https://doi.org/10.1016/j.nlm.2010.07.006
- Talpos, J. C., Winters, B. D., Dias, R., Saksida, L. M., & Bussey, T. J. (2009). A novel touchscreen-automated paired-associate learning (PAL) task sensitive to pharmacological manipulation of the hippocampus: A translational rodent model of cognitive impairments in neurodegenerative disease. Psychopharmacology, 205, 157–168. https://doi.org/10.1007/s00213-009-1526-3
- Wood, S. J., Proffitt, T., Mahony, K., Smith, D. J., Buchanan, J.-A., Brewer, W., … Pantelis, C. (2002). Visuospatial memory and learning in first-episode schizophreniform psychosis and established schizophrenia: A functional correlate of hippocampal pathology? Psychological Medicine, 32, 429–438. https://doi.org/10.1017/S0033291702005275