
1 Introduction

Experiments in modern neuroscience require techniques capable of monitoring (“reading”) and manipulating (“writing”) neural activity across a staggering range of spatiotemporal scales. For instance, ion channels have nanometer dimensions (Fig. 1a) and undergo conformational changes on micro- to millisecond timescales, whereas neuronal circuits in human brains span decimeters (Fig. 1d) and can be refined over the course of a lifetime. Owing to the minimally invasive nature of infrared photons in brain tissue, a plethora of optical technologies based on multiphoton excitation and spanning these spatiotemporal scales has been developed, and the toolbox of optical actuators and indicators of neural activity has continuously expanded and evolved. As a result of this rapid multidisciplinary progress, optical activation and inhibition of genetically defined classes of neurons can now be achieved using a variety of light-gated actuators (mainly channelrhodopsins), and neural activity can be detected using highly specific and sensitive fluorescent probes, including calcium and voltage indicators. The field of optogenetics in neuroscience has matured to such an extent that two-photon all-optical experiments can be performed in-vivo, whereby signals from multiple neurons constituting neural circuits distributed across different brain regions can be elicited and recorded optically, with single-cell and sub-millisecond precision [2].

Fig. 1

Different spatiotemporal scales encountered in all-optical neurophysiology experiments. Relevant spatial (a–d) and temporal (e–h) scales encountered in all-optical neurophysiology experiments. (a) Ion channels and pumps, with nanometer dimensions, residing within the cell membrane are ultimately responsible for the excitability of individual neurons. (b) All-optical neurophysiology experiments aiming for photoactivation with single-cell resolution target the neuronal soma (~10 μm diameter). (c) Neurons distributed within millimeter volumes that display coordinated activity are termed neural ensembles or engrams. The primary goal of a growing number of all-optical neurophysiology experiments is to manipulate these functionally defined circuits. (d) Neural activity governing a particular behavior is commonly distributed across multiple, often non-contiguous, brain regions which can span mesoscale (mm–cm) distances. (e) Neurons are depolarized by excitatory inputs (EPSPs) and hyperpolarized by inhibitory inputs (IPSPs) on timescales of tens of milliseconds [1]. (f) Larger and longer changes in membrane potential are sometimes observed when neurons receive multiple synaptic inputs. (g) Action potentials (APs) are fired when the somatic membrane potential is depolarized beyond threshold (−55 mV). Action potentials invert the membrane potential on millisecond timescales. (h) Individual neurons display characteristic patterns of AP firing. Many all-optical neurophysiology experiments (i) simultaneously monitor the dynamic pattern of AP firing in different neurons or (ii) record the firing response of particular neurons to external stimuli (Si) in trials before replaying and manipulating these physiological activity patterns using photostimulation and inhibition

However, since different opsins and reporters can exhibit vastly different photophysical characteristics, it is necessary to optimize all-optical neurophysiology experiments according to the specific requirements of each biological question. All-optical experiments are challenging, and their success relies on the careful selection and co-expression of an appropriate actuator–sensor combination, a suitable photoactivation approach for precise and efficient excitation of the desired population of neurons, and a sufficiently sensitive imaging method capable of recording neural activity without spurious activation of the opsin-expressing cells.

This chapter introduces and reviews some of the most important molecular and optical technologies for manipulating and recording neural activity and highlights critical parameters common to most all-optical neurophysiology experiments. Since many of these technologies are described in greater detail in subsequent chapters of this book, we refer the reader to these chapters and instead provide specific technical details for implementing generalized phase contrast (GPC) and temporal focusing. Finally, a detailed protocol for the preparation of mouse hippocampal organotypic slices (a commonly used biological preparation) expressing both optical actuators and indicators for all-optical interrogation of neuronal circuits is included.

2 State-of-the-art Technologies for All-Optical Neurophysiology

All-optical neurophysiology experiments require appropriate molecular tools: light-triggered actuators capable of controlling ion fluxes through the cell’s membrane, and thus the electrical activity of neurons [3,4,5,6,7,8], and fluorescent probes which provide optical readouts of neural activity [9,10,11,12,13,14]. A wide variety of molecular tools, exhibiting different photophysical properties, have been discovered and engineered to meet this requirement. This chapter will focus on tools capable of optically manipulating and recording neuronal activity in scattering tissue, which commonly rely on two-photon excitation (2PE) based on the near-simultaneous absorption of two infrared (IR) photons [15,16,17]. The necessity of exploiting non-linear optical phenomena such as 2PE for performing spatially localized experiments in scattering tissue is well documented [18]. 2PE laser scanning microscopy (2PE-LSM), in which a tightly focused, pulsed laser beam is rapidly displaced using galvanometric mirrors [19], is the gold-standard technique for imaging in turbid biological tissue and has also been applied to manipulate neural activity [20]. However, since in some cases this approach does not provide sufficient temporal resolution, a large number of different methods have been developed for optimized excitation of channelrhodopsins and indicators of neural activity. This section will provide an overview of the photophysical requirements of the molecular tools commonly used in all-optical neurophysiology experiments, before reviewing some state-of-the-art sequential and parallel 2PE approaches and evaluating their suitability with respect to exciting and imaging these actuators and indicators.

2.1 Photophysical Properties of Common Molecular Tools Used for All-Optical Neurophysiology

Useful technologies for all-optical neurophysiology experiments must be capable of eliciting, suppressing, and recording neural activity on physiologically relevant spatiotemporal scales, highlighted in Fig. 1a–h. In order to describe and relate the photophysical properties of molecular tools to specific physiological benchmarks, some relevant properties of single neurons and neural networks will first be reviewed.

Although the extracellular and cytoplasmic environment of any neuron is electrically neutral, the immediate surroundings of the plasma membrane (an electrical insulator) contain very thin clouds of negative and positive ions that are differentially spread on its inner and outer surfaces (Fig. 1a) [21]. At rest, the inner cytoplasmic surface has an excess of negative charge with respect to the extracellular side. This electrical gradient is actively generated and maintained by the action of the sodium–potassium pump and the presence of passive ion channels (Fig. 1a), which are ultimately responsible for cellular excitability. The difference in charge distribution across the membrane gives rise to a difference in electric potential, the membrane potential (Vm), which for most neurons has a somatic value of around −70 mV (Fig. 1b). During communication via synaptic transmission between connected neurons (Fig. 1c, d), the Vm of a particular neuron is altered by presynaptic excitatory (depolarizing) and/or inhibitory (hyperpolarizing) inputs (Fig. 1e, f). These perturbations of the resting potential are the so-called post-synaptic potentials (PSPs) and they are processed and integrated by the soma of the cell. If the net sum of multiple excitatory and inhibitory PSPs, arriving within the membrane time constant, exceeds a threshold value (~ −55 mV), an action potential (spike) is triggered. Action potentials are highly stereotypical electrical signals which re-orient the electric field across the neuronal membrane on millisecond timescales (Fig. 1g). Action potentials typically lead to an elevation of the concentration of cytosolic calcium through voltage-gated calcium channels, which can last an order of magnitude longer than the action potentials themselves. Each spike is communicated to post-synaptic neurons via local and long-range synaptic connections: pre-synaptic neurons release neurotransmitter onto postsynaptic targets, evoking unitary inhibitory or excitatory post-synaptic potentials (uPSPs). Typically, uPSPs are small in amplitude and duration, while PSPs resulting from the integration of multiple synaptic inputs have been observed to give rise to longer and larger variations of somatic membrane potential (Fig. 1e, f) [22, 23] and ultimately may result in action potential firing (Fig. 1g). A wide variety of precise patterns and frequencies of action potential firing have been observed, both for individual neurons responding to distinct stimuli and for different types of neurons located in particular brain regions [24] (Fig. 1h). Furthermore, particular patterns of spike firing in many individual neurons have been observed to be closely correlated with changes in the external sensory world [25,26,27], and observations of highly coordinated patterns of activity in ensembles of different neurons [28, 29] have led to one of the central hypotheses of modern neuroscience: that higher brain function arises from the interactions between interconnected neurons [30] (Fig. 1c, d, h). Elucidating the causal relationship between neural circuits and network function requires methods capable of stimulating and silencing neurons to mimic physiological patterns of network activity. This necessitates the observation and subsequent manipulation of firing rate and spike timing across an ensemble of neurons with sub-millisecond temporal precision.
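To make the integrate-to-threshold picture above concrete, the sketch below simulates a minimal leaky integrate-and-fire neuron. It is illustrative only: the membrane time constant, input resistance, and current amplitudes are nominal assumptions chosen to match the scales quoted in this section, not measurements from any particular preparation.

```python
import numpy as np

# Minimal leaky integrate-and-fire sketch (nominal parameters, illustration only).
dt = 1e-4                                    # 0.1 ms time step (s)
tau_m = 20e-3                                # membrane time constant (s)
R_in = 150e6                                 # input resistance (ohm)
V_rest, V_thr, V_reset = -70e-3, -55e-3, -70e-3   # resting, threshold, reset (V)

def simulate(I):
    """Integrate an input-current trace; return the voltage trace and spike times."""
    V, trace, spikes = V_rest, [], []
    for k, i_k in enumerate(I):
        V += dt * (-(V - V_rest) + R_in * i_k) / tau_m   # leaky integration
        if V >= V_thr:                       # depolarized beyond threshold -> spike
            spikes.append(k * dt)
            V = V_reset
        trace.append(V)
    return np.array(trace), spikes

t = np.arange(0.0, 0.12, dt)
I = np.zeros_like(t)
I[(t >= 0.010) & (t < 0.015)] = 100e-12      # two brief EPSC-like pulses: they
I[(t >= 0.022) & (t < 0.027)] = 100e-12      # summate but remain sub-threshold
I[(t >= 0.060) & (t < 0.090)] = 200e-12      # a sustained input that crosses threshold

trace, spikes = simulate(I)
print("spike times (ms):", [round(1e3 * s, 1) for s in spikes])
```

With these assumed values, the two brief inputs depolarize the cell by only a few millivolts before decaying, whereas the sustained input drives the membrane past threshold and evokes spikes.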

Following decades of heroic protein engineering efforts, desired populations of neurons in virtually all genetically tractable model organisms can now be engineered to express photosensitive transmembrane proteins known as channelrhodopsins [31]. Channelrhodopsins are ion channels of microbial origin, which can be excited into current-conducting states upon light absorption [32, 33] (Fig. 2a). The first sets of experiments that demonstrated optical control of neuronal activity using channelrhodopsin were based on the heterologous expression of Channelrhodopsin-2 from Chlamydomonas reinhardtii [33, 36, 37]. Since then, a dizzying number of excitatory and inhibitory opsin variants with different mechanistic and operational properties have been discovered and engineered. While the optimal choice of opsin for a given experiment depends on the specific preparation and biological question, usually opsin variants exhibiting large photocurrents, selectivity for relevant ions, high light sensitivity, appropriate channel kinetics, and spectral compatibility are preferred.

Fig. 2

Two-photon characterization of channelrhodopsins. (a) In the simplest conceptual model of the opsin photocycle, ion channels reside in the closed state. Upon light absorption, the channels open, allowing the exchange of ions between the cytosol and extracellular space. Depending on the ion selectivity of the channel and the electrochemical gradient, this flow of ions will hyperpolarize or depolarize the cell membrane and ultimately inhibit or excite the cell. (b) In reality, the opsin photocycle is more complex, but can reasonably be approximated by the so-called four-state model. For more details refer to [34, 35]. (c) Opsins can be characterized using whole-cell voltage clamp to measure the currents that flow across the cell membrane under different conditions. Inset: visualization of a characteristic 12-μm diameter holographic spot, typically used for parallel 2P photoactivation. (d) Photocurrent traces recorded in whole-cell voltage clamp from CHO (Chinese Hamster Ovary) cells expressing ChRmine as a function of increasing 2P excitation power (920 nm, 12-μm diameter excitation spot, 200 ms continuous illumination, incident powers varied between 0 and 50 mW as indicated in the color bar). The characteristic features of the photocurrent traces (kinetics and peak/stationary photocurrent) are labeled. The magnitude of the photocurrent increases with power density to saturation. (e) 2P-LSM image of AAV9-CaMKIIa-somBiPOLES-mCerulean expressed in hippocampal organotypic slice cultures by bulk infection (scale bar represents 50 μm). (f) Photostimulation (upper, 1100-nm illumination, 0.44 mW/μm², 5 ms continuous illumination (red bar)) and inhibition (lower, 920-nm illumination, 0.3 mW/μm², 200 ms illumination during constant current injection (gray bar)) of a single neuron expressing somBiPOLES with a 12-μm diameter holographic spot

Using light to modulate electrical activity in opsin-expressing neurons generally requires generating photocurrents with sufficient magnitudes, within the membrane time constant, to depolarize or hyperpolarize the cells and evoke or inhibit action potentials, although there are interesting and notable exceptions [38]. The precise magnitude of photocurrent necessary to evoke or inhibit spikes depends on the biophysical properties of the membrane such as input resistance, capacitance, and action potential threshold, which can vary significantly between neurons. Furthermore, the absolute photocurrent magnitude that can be generated in a given neuron itself depends on many factors – including the specific properties of the opsin, the degree of expression, the efficiency of membrane targeting, and the photostimulation modality (for instance, single- or multi-photon excitation). Excitatory or inhibitory effects can be elicited by expressing different sub-classes of opsins with specific ion selectivity (for instance, sodium or protons [39, 40] for excitation and chloride or potassium for inhibition) [41, 42]. Since the single-channel conductance of most opsins (~50 fS) is three to four orders of magnitude smaller than that of ion channels endogenously expressed in neurons (~100 pS) [32, 43, 44], optical control of neuronal activity relies on the expression and subsequent excitation of sufficient opsin molecules distributed over an extended region of the cell membrane. This consideration is essential in the case of 2P excitation, which is intrinsically spatially confined. Hence, one of the first challenges in optogenetics experiments is achieving reliable, homogeneous, and functional expression of the desired opsin in the membranes of target neurons. As a result of intensive protein engineering efforts, several successful strategies such as codon optimization [45, 46] and membrane trafficking sequence optimization [47] have been developed which enable sufficiently high functional expression levels (~10⁵ opsin molecules per neuron) without detectably perturbing membrane physiology [48]. In particular, soma-targeted opsin variants which utilize the C-terminal targeting motif from the soma-localized potassium channel Kv2.1 have exhibited improved membrane localization and enhanced photocurrents [49]. Additionally, variants of inhibitory opsins with similar soma-targeting sequences have been demonstrated to result in fewer antidromic spikes [42]. The use of soma-targeted opsins has also been demonstrated to significantly reduce off-target photoactivation [6, 7, 50], which is a crucial consideration for certain applications.
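A rough order-of-magnitude estimate illustrates why so many membrane-localized opsins are required. The sketch below combines the single-channel conductance and expression level quoted above with an assumed driving force of ~70 mV; the fraction of channels conducting at any instant is a free parameter, since spatially confined 2P excitation addresses only part of the membrane at a time.

```python
# Order-of-magnitude sketch (not a measurement) of the photocurrent available from
# ~1e5 membrane-localized opsins with ~50 fS single-channel conductance (values
# quoted above); the ~70 mV driving force is an assumed nominal value.
gamma_opsin = 50e-15        # single-channel conductance (S)
n_opsins = 1e5              # functional opsins per neuron
driving_force = 70e-3       # assumed effective driving force (V)

for fraction_conducting in (1.0, 0.3, 0.1):
    I_photo = n_opsins * fraction_conducting * gamma_opsin * driving_force
    print(f"fraction conducting {fraction_conducting:>4}: ~{1e12 * I_photo:.0f} pA")
# -> ~350 pA if every channel conducts, but only ~35 pA if 10% do; comparable to,
#    or below, the currents typically required to drive a neuron to spike threshold.
```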

At physiological membrane potentials, exposing channelrhodopsin-expressing neurons to light of an appropriate wavelength causes the light-gated ion channels to open. This allows the passage of specific ions across the cell membrane (according to their electrochemical gradients) and generates photocurrents (I) that can modulate neuronal activity (Fig. 2a, b). As highlighted previously, enhancing or suppressing neural activity using optogenetics requires the excitation of a sufficient number of opsins, within the membrane time constant, to induce adequate depolarization (or hyperpolarization) of the soma to cause (or prevent) the opening of voltage-gated ion channels. As a result, it is the macroscopic photocurrent parameters that emerge from the combined action of functional, membrane-localized channels that are relevant for all-optical neurophysiology experiments, and these will be discussed throughout this section.

It is possible to quantify and characterize these macroscopic photocurrent parameters using electrophysiology, specifically, using whole-cell voltage clamp (Fig. 2c). For example, in response to continuous illumination, the photocurrent of a channelrhodopsin-expressing neuron exhibits a characteristic profile with three main features: (i) an initial peak (Ip) which decays to reach (ii) a steady-state (stationary) plateau (Is), and finally (iii) a return to baseline in the absence of light. Representative photocurrent traces are plotted for increasing illumination power in Fig. 2d with these main features highlighted. The transitions between these features of the macroscopic photocurrent are commonly parametrized by the time constants τon, τin, and τoff, which indicate, respectively, the time it takes for the photocurrent to reach its peak when the channels open, the time constant of inactivation, and the time taken to return to zero when the channels close. These time constants typically have millisecond values [47] but vary between channelrhodopsins and can also depend on the intrinsic membrane properties of the cell. The functional profile of this macroscopic photocurrent has extremely important implications for 2P optogenetics experiments since it ultimately dictates the optimum illumination strategy and imposes bounds on temporal resolution, temporal precision (jitter), and spiking rate [48]. Different characteristics of the macroscopic photocurrent are relevant for different paradigms of optogenetic photostimulation. For instance, opsins with fast “off” kinetics are critical for applications which necessitate the induction of spike trains with high temporal fidelity (e.g., sub-millisecond precision) and high firing frequencies [49]. Driving cell depolarization at rates faster than the kinetics permit can cause prolonged depolarization to the so-called plateau potential, produced by excessive cation influx, which can lead to non-uniform responses to identical light pulses and, in some cases, cessation of action potential firing [51]. However, it is important to note that opsins with faster τoff kinetics generally require higher light intensities to reach action potential threshold, which might be an important consideration in experiments aiming to simultaneously photostimulate large numbers of neurons [52] where the power available for photoactivation must be divided between targets. On the other hand, to reliably inhibit action potential firing during a prolonged interval, the macroscopic photocurrent must exhibit a high steady-state to peak (Is/Ip) ratio and high conductivity of anions throughout the entire photocycle. Influencing neural activity over extremely long periods of time without causing photodamage, for instance to sensitize entire neuronal networks to native activity patterns, benefits from the use of a class of opsins with exceptionally slow kinetics known as step function opsins (SFOs). SFOs can be photoactivated using a single, low-intensity light pulse, remain in the “open” state for extended timescales (minutes), and can often be closed using a second pulse of light at a different wavelength [53]. In conclusion, the photocycle kinetics, sensitivity, selectivity, and photocurrent magnitude are commonly the primary considerations when selecting an appropriate opsin for a particular all-optical neurophysiology experiment.
Having selected and successfully expressed the channelrhodopsin, the intensity and duration of the delivered light must be titrated until the desired neuronal response is reliably elicited. To achieve inhibition and excitation of the same neurons during the same experiment, with different excitation wavelengths, bicistronic constructs such as BiPOLES [54] can be used (Fig. 2e, f). Such constructs consist of excitatory and inhibitory channelrhodopsins expressed in tandem, ensuring a fixed stoichiometry.
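The macroscopic photocurrent features described above (Ip, Is, τon, τin, τoff) can be captured by a simple piecewise-exponential model. The sketch below is purely phenomenological, not the four-state model of Fig. 2b, and the default parameter values are arbitrary illustrative choices rather than properties of any specific opsin.

```python
import numpy as np

def photocurrent(t, t_light_off, I_p=800e-12, I_s=400e-12,
                 tau_on=1e-3, tau_in=20e-3, tau_off=5e-3):
    """Phenomenological photocurrent (A) for continuous light switched off at t_light_off (s)."""
    t_lit = np.minimum(t, t_light_off)                      # time spent under illumination
    # rise with tau_on (channel opening), multiplied by inactivation toward the plateau I_s
    I_lit = (1 - np.exp(-t_lit / tau_on)) * (I_s + (I_p - I_s) * np.exp(-t_lit / tau_in))
    # after the light is switched off, the current relaxes back to baseline with tau_off
    return np.where(t <= t_light_off, I_lit, I_lit * np.exp(-(t - t_light_off) / tau_off))

t = np.arange(0.0, 0.3, 1e-4)                               # 300 ms at 0.1 ms resolution
I = photocurrent(t, t_light_off=0.2)                        # 200 ms illumination, as in Fig. 2d
print(f"peak ~{1e12 * I.max():.0f} pA, plateau ~{1e12 * I[int(0.19 / 1e-4)]:.0f} pA")
```

Varying tau_off in such a toy model makes explicit why fast-closing opsins demand higher intensities (less photocurrent is integrated per pulse) while slow-closing ones limit the maximum attainable firing rate.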

In all-optical neurophysiology experiments, photostimulation is performed alongside functional imaging, both in order to identify the specific set of neurons to target according to their activity patterns (in response to a particular stimulus) and also to observe how the induced patterns of neural activity affect cellular or network function. Fluorescent reporters sensitive to changes in many different aspects of neuron physiology have been developed, but those responsive to action potentials, such as calcium and voltage indicators, are the most widely used.

Calcium imaging using fluorescent protein sensors has proved particularly useful for all-optical neurophysiology since the activity of large numbers of neurons can be recorded simultaneously [55,56,57,58,59,60]; the same group of neurons can be imaged over extended time periods and can also be compared across different recording sessions. The allure of calcium imaging stems, in part, from the photophysical properties of the optical signal. In mammalian neurons, spiking activity results in a temporary increase of Ca2+ concentration throughout the soma via voltage-gated Ca2+ channels, which open as a result of the change in membrane potential during the action potential (Fig. 3a). This somatic Ca2+ influx may also be amplified by calcium release from intracellular stores [61, 62]. As such, a vast number of freely diffusing calcium indicators distributed throughout the cytosolic volume can collectively report on the occurrence of action potentials. Although action potentials only last a few milliseconds, the resulting calcium elevation lasts approximately two orders of magnitude longer, producing a bright, slowly decaying fluorescent signal that can readily be detected with high signal-to-noise ratio (SNR), as illustrated in Fig. 3b. The GFP-based GCaMP family of genetically encoded calcium indicators (GECIs) is used most commonly in all-optical neurophysiology. Multiple rounds of mutagenesis have yielded the latest suite of variants (jGCaMP8), which exhibit different properties optimized for particular applications [63]. While calcium imaging is the most commonly used approach for imaging the activity of large neural populations, the potential pitfalls associated with using a second messenger that exhibits slow kinetics are also widely acknowledged and must be considered [64, 65].
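To illustrate why the slow calcium transient is so convenient to detect, the sketch below convolves a brief burst of action potentials with a simple double-exponential indicator impulse response. The rise and decay constants and the unit response amplitude are nominal, GCaMP-like assumptions rather than calibrated values for any specific indicator.

```python
import numpy as np

dt = 1e-3                                        # 1 ms sampling
t = np.arange(0.0, 3.0, dt)
spikes = np.zeros_like(t)
spikes[[500, 525, 550, 575, 600]] = 1.0          # a brief 40 Hz burst of 5 APs at t = 0.5 s

tau_rise, tau_decay = 0.05, 0.5                  # s; decay ~2 orders slower than the ~ms AP
kernel = np.exp(-t / tau_decay) - np.exp(-t / tau_rise)
kernel /= kernel.max()                           # assumed unit dF/F response per spike

dff = np.convolve(spikes, kernel)[: t.size]      # simulated fluorescence trace
print(f"peak dF/F ~{dff.max():.1f}; {dff[-1]:.2f} remaining 2.4 s after the last spike")
# Individual 1-2 ms spikes are not resolved, but the burst yields a large, slowly
# decaying transient that is readily detected at typical 10-30 Hz frame rates.
```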

Fig. 3

Calcium and voltage indicators as reporters of neuronal activity. (a) Cytosolic calcium concentration increases temporarily as a result of the change in membrane potential that occurs during an action potential. The intensity-based fluorescent probe GCaMP binds to calcium. This alters the conformation of the circularly permuted GFP chromophore and results in an increase in fluorescence intensity. (b) Left: 2P-LSM image of AAV9-Syn-jGCaMP7s expressed in hippocampal organotypic slice cultures by bulk viral infection (scale bar represents 25 μm). Right: fluorescent jGCaMP7s traces in response to trains of action potentials (5, 15, 40 Hz) evoked by pulsed current injection into a patched neuron (indicated below). (c) In the case of voltage-sensing domain (VSD)-based voltage indicators, a change in membrane potential causes a change in conformation of the VSD, which is covalently linked to a circularly permuted fluorophore. The change in conformation of the fluorophore typically results in a decrease in fluorescence intensity. (d) Left: 2P-LSM image of AAV8-hSyn-ASAP3b expressed in hippocampal organotypic slice cultures by bulk viral infection (scale bar represents 25 μm). Right: simulated ASAP3b traces in response to trains of action potentials (5, 15, 40 Hz) as in (b)

Voltage indicators generate optical signals with magnitudes proportional to changes in membrane potential (Fig. 3c) and can be used to provide a readout of precise action potential timing in addition to sub-threshold depolarizations and hyperpolarizations. At present, genetically encoded voltage indicators (GEVIs) may broadly be divided into three categories: rhodopsin-based indicators [66, 67], hybrid chemogenetic indicators [13], and sensors based on the fusion of a fluorophore to a voltage-sensing domain (VSD) [68, 69], though only the latter category of GEVIs has been demonstrated to be compatible with 2P excitation [70]. Calcium imaging is a much more prevalent technique than voltage imaging, since optically monitoring changes in membrane potential is fundamentally more challenging in terms of signal detection. Firstly, only voltage-sensitive reporters located within a Debye length of the membrane can report on the membrane potential, and improperly localized GEVIs reduce the sensitivity of optical measurements of membrane potential by increasing background fluorescence. As for channelrhodopsins, it has been demonstrated that fusing GEVIs with soma localization motifs improves membrane trafficking and reduces off-target intracellular labeling. While a typical neuronal soma constitutes around 60% of the entire cell volume, the somatic membrane only accounts for 2–7% of the total cell surface area [71, 72]. As a result, the number of voltage indicators that can report on the membrane potential is less than 0.1% of the number of Ca2+ indicators in the cytosol [73, 74], which places an upper bound on the signal-to-noise ratio of voltage imaging (Fig. 3d) [75]. This is compounded by the fact that action potentials occur on much shorter timescales than the consequent calcium signal, and hence voltage imaging requires much faster sampling rates (>500 Hz and in many cases >1 kHz, depending on the specific application). Raster scanning is an inefficient approach for detecting membrane-localized signals which account for a small fraction of the field of view (FOV), and the resulting frame rates are insufficient for population-level voltage imaging. The unifying feature of different approaches optimized for 2P voltage imaging is an increased illumination duty cycle of signal-generating pixels. Such increases in temporal resolution are often achieved at the cost of increased photobleaching, which is compounded by the fact that voltage indicators are replenished more slowly than calcium indicators because diffusion is much slower in the membrane lipid bilayer than in the cytoplasm, necessitating the use of more robust fluorophores [76]. Additionally, sample motion is more problematic for voltage imaging. Though population voltage imaging is technically more challenging than calcium imaging, it has the potential to provide more physiologically relevant information about the logic and syntax of the neural code and, indeed, is necessary for a subset of all-optical neurophysiology experiments.
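A back-of-the-envelope comparison makes this signal-detection penalty explicit. Assuming shot-noise-limited detection, the achievable SNR scales with the square root of the number of photons collected per sample, which in turn scales with the number of reporting molecules and with the integration time per sample; the molecule ratio (<0.1%) is taken from the text, while the frame rates below are nominal assumptions.

```python
import math

# Rough, shot-noise-limited comparison of voltage vs. calcium imaging SNR.
# Inputs: molecule ratio quoted above; frame rates are assumed nominal values.
molecule_ratio = 1e-3          # membrane-bound GEVIs vs. cytosolic GECIs (<0.1%, from text)
rate_calcium = 30.0            # assumed calcium-imaging frame rate (Hz)
rate_voltage = 1000.0          # assumed voltage-imaging sampling rate (Hz)

photons_ratio = molecule_ratio * (rate_calcium / rate_voltage)   # photons per sample
snr_ratio = math.sqrt(photons_ratio)
print(f"relative SNR (voltage / calcium) ~ {snr_ratio:.3f}  (~{1 / snr_ratio:.0f}x lower)")
# -> roughly two orders of magnitude lower SNR per sample, before even considering
#    the smaller fractional fluorescence change per action potential of most GEVIs.
```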

2.2 Combining Molecular Tools for All-Optical Neurophysiology Experiments

Compatible actuators and indicators must be carefully selected in order to simultaneously and independently monitor and control neural activity in a single preparation. Firstly, the fluorophore used to aid visualization of opsin-positive cells should generally be spectrally separate from both the opsin and the activity reporter and should be chosen so as not to occupy precious spectral bandwidth. This is a particularly important consideration in the case of voltage imaging, where any bleed-through of activity-independent fluorescence degrades the signal-to-noise ratio and ultimately reduces the detectability of neuronal signals. Most crucially, all-optical experiments generally benefit from employing spectrally orthogonal opsins and activity reporters. Spurious activation of opsin-positive neurons while imaging neural activity can perturb neural networks by altering excitability and inducing changes in synaptic release and plasticity [77]. Imaging artifacts can also be induced due to the excitation of activity reporters during opsin photoactivation, though this is typically less severe since network function is not affected and, ordinarily, these artifacts can be minimized by precisely de-synchronizing photostimulation and imaging (possible at low frame rates such as those used for calcium imaging) or removed during subsequent analysis. Hence, the term “optical crosstalk” is commonly used to describe artefactual photostimulation induced during imaging in all-optical neurophysiology experiments (for a much more detailed discussion regarding crosstalk during all-optical neurophysiology experiments refer to Chaps. 2, 4 and 5).

Although channelrhodopsin variants with peak single-photon (1P) excitation wavelengths spanning the visible region of the electromagnetic spectrum have been engineered [39, 78], performing crosstalk-free, multi-color experiments is not trivial. Evidently, variants of actuators and reporters from opposing ends of the spectral palette should be chosen. Unfortunately, the action spectra of channelrhodopsins commonly used for 2P optogenetics are typically extremely broad [39]. Furthermore, so-called red-shifted opsins exhibit persistent “blue tails”, which coincide with wavelengths used for 2P imaging of activity reporters (920–950 nm). A number of different approaches aiming to alleviate this problem have been proposed (see also Chap. 2). Very recently, implementation of spectrally independent excitation beams enabled artifact-free all-optical experiments with GCaMP and red-shifted channelrhodopsins (see also Chap. 4) [79]. Parallel excitation methods have taken advantage of the different sub-cellular distributions of GECIs and opsins [80], though of course this is less applicable in the case of voltage imaging (where both indicator and actuator are membrane localized), and furthermore is not intrinsically robust to sample motion, which is problematic for in-vivo applications. An alternative approach is to employ blue-shifted opsins in combination with red-shifted reporters [50, 81]. One benefit of this is that longer-wavelength fluorescent photons exhibit longer scattering lengths in biological tissue, which should facilitate deeper imaging. While this approach has found success for 1P excitation [67], its two-photon counterpart has thus far been limited: genetically encoded, red-shifted activity indicators display lower 2P efficacies than green ones, and amplified lasers in the spectral region adequate for photostimulating several cells expressing blue-shifted opsins (920–950 nm) have only recently become available [81]. Another approach to minimize crosstalk is to use opsins with fast kinetics and optimize the raster-scanned trajectory used to image GCaMP activity, so as to minimize the accumulation of photocurrent within the membrane time constant. Although this method does not eliminate sub-threshold network perturbation, the (relatively) fast repolarization of neurons expressing opsins with short τoff values means they are unlikely to fire due to depolarization induced by the scanned imaging beam. Of course, successful employment of this method requires careful titration of the imaging conditions, including imaging power, frame rate, and field of view; it remains an interim approach until high-efficacy blue-shifted opsins, red-shifted activity indicators [82], and amplified lasers in the appropriate spectral range become available.

A final subtle point to note when combining actuators and indicators in all-optical neurophysiology experiments is that sustained opsin activation can alter the conditions of the intra- and extracellular environment [83], which could impact the behavior of the opsin, the excitability of the neuron, and also the fluorescent yield of the activity reporter [84], while long-term effects such as changing chloride concentration could influence the entire network. Each of these factors should be considered when drawing conclusions about neural activity based on fluctuations in the fluorescent signal.

2.2.1 Expressing Molecular Tools in Specific Populations of Neurons for All-Optical Neurophysiology Experiments

To perform all-optical neurophysiology experiments, neurons must be genetically modified in order to induce the expression of actuators and indicators in specific populations of neurons, typically via promoter-driven expression specificity. Examples of ubiquitous promoters that can be used to drive expression of actuators and indicators in a broad set of neurons, and that are strongly and persistently active in a wide range of cells, are the hSyn (human synapsin) promoter and the synthetic mammalian-specific promoter CAG. A variety of approaches exist for gene delivery based on the molecular signatures, projection patterns, anatomical organization, and functional activity of neurons [85]. Viral approaches, electroporation, and constitutive expression in transgenic animals have all been utilized. The most commonly used strategy to date is viral transduction. Viral vectors can be delivered directly to specific brain regions using stereotaxic, intracranial injections, yielding long-term expression and high transgene levels, which is especially important in the case of promoters with low transcriptional activity [86]. The degree of viral spread (and hence transgene expression) from the injection site varies with both virus serotype and tissue type [87]. In general, for rodent brains, opsin gene expression reaches functional levels within 3 weeks after adeno-associated virus (AAV) injection. Another approach, single-cell electroporation, provides a much greater degree of control over protein expression patterns than viral transduction and can be used to deliver longer segments of DNA. Using electroporation, an exact set of neurons can be transfected with precise amounts of a single plasmid or with mixtures of plasmids with well-defined ratios [88]. Alternatively, specific cortical layers can be targeted with in utero electroporation [89]. Transgenic animals are also invaluable for all-optical experiments but can be expensive and time-consuming to generate. Before establishing transgenic lines, it is important to test, characterize, and calibrate appropriate optogenetic actuators and reporters. In vitro dissociated cell cultures represent an important tool for characterizing actuators and indicators in single homogeneous cell populations. However, because the brain’s architecture is lost in the culture process, they are not suitable for studying brain function [90]. Organotypic cultures are becoming a favored preparation for testing new configurations for all-optical neurophysiology experiments (such as new actuator/indicator combinations), since the main network architecture is maintained (Sect. 3.4; Fig. 9d) and it is possible to test many different conditions per animal (10–15 in the case of hippocampal organotypic cultures). A protocol used to produce hippocampal organotypic cultures and perform bulk viral infection is presented in Sect. 3.4 of this chapter. In Fig. 4 we show an all-optical experiment in mouse hippocampal organotypic slices, co-expressing the soma-targeted cation channelrhodopsin ST-ChroME and the genetically encoded Ca2+ indicator GCaMP7s (Fig. 4a). Neurons were photostimulated using two-photon excitation with temporally focused 12-μm diameter holographic spots, and their responses were detected by imaging GCaMP using 2P scanning imaging on a standard galvanometric-based setup. 28 of 50 cells yielded calcium transients in response to photostimulation (Fig. 4b, green horizontal arrowheads).
During the experiment (~160 s), two synchronous network-wide bursting events were observed (Fig. 4b, vertical arrowheads at the bottom), the first triggered by the direct activation of a hub-like cell (Fig. 4b, cell 15; pink arrowheads and inset) and the second possibly a spontaneous event (Fig. 4b, orange arrowhead at the bottom). These events are typically seen in developing hippocampal networks [91], and demonstrate that network function is maintained in organotypic slices. Moreover, the large amplitude of these events reflects the large number and/or high frequency of action potentials fired, in contrast to the fine-tuned control of action potentials evoked by patterned photostimulation with single-cell resolution and sub-millisecond precision, as evidenced by the inset shown in Fig. 4c.

Fig. 4

All-optical electrophysiology in mouse hippocampal organotypic slices. (a) Two-photon fluorescence image showing the co-expression of the high-performance, soma-targeted cation channelrhodopsin ST-ChroME, here fused to the fluorophore mRuby3 (red color corresponds to the nuclear localization of the mRuby3 reporter), and the genetically encoded Ca2+ indicator GCaMP7s (green) in the CA3 region of a hippocampal organotypic slice. White circles represent the two-photon temporally focused spots delivered to excite 50 different neurons (12 μm spot diameter, 1040 nm wavelength, 0.26 mW/μm² incident power). Scale bar: 50 μm. (b) Two-photon imaging of GCaMP7s fluorescence signals evoked by the sequential stimulation of the cells (interstimulus interval ~3 s). Gray bars represent the stimulation protocol, which consisted of a train of 5 pulses of 5-ms duration at 4 Hz. The identity of the cells during the sequential stimulation is denoted by the blue numbers on top. In this experiment, 28 out of 50 cells yielded calcium transients in response to stimulation (green horizontal arrowheads). During the acquisition time (~160 s) two synchronous network-wide bursting events were observed (vertical arrowheads at the bottom); the first one seemed to be triggered by the direct activation of a hub-like cell (cell number 15 in the sequence; see pink inset), while the second network-wide event seemed to be triggered by the spontaneous activation of a hub-like cell in the circuit. Pink and orange arrowheads denote the evoked or spontaneous nature of the events, respectively. A single event (in only 1 neuron) with similar characteristics to the network-wide bursting events in terms of amplitude and kinetics was observed near the end of the acquisition time (horizontal orange arrowhead). The large amplitude of these events reflects the large number and/or frequency of action potential firing in comparison to the fine-tuned control of firing activity evoked by patterned photostimulation with single-cell resolution and sub-millisecond precision, as observed in the inset in (c)

2.3 State-of-the-art Two-Photon Excitation Approaches for All-Optical Neurophysiology

An extraordinary number of different 2PE technologies have been developed to precisely control neuronal activity using microbial channelrhodopsins and to provide high-fidelity readouts of activity with calcium and voltage indicators. In this chapter, these methods will be broadly categorized as either sequential or parallel methods. While in sequential-2PE a tightly focused beam visits distinct voxels consecutively, parallel-2PE encompasses all methods in which 2PE occurs within a region larger than the diffraction-limited volume.

A wide variety of components capable of rapidly varying the three-dimensional position of a tightly focused beam throughout a volume of interest have been incorporated into 2PE-LSM instruments to increase the temporal resolution of sequential, point-scanned, 2PE. This includes devices such as resonant galvanometric mirrors [92], rotating polygon mirrors [93, 94], acousto-optic deflectors (AOD) [95,96,97,98,99], deformable mirrors [100], spatial light modulators (SLMs) [101,102,103,104], piezoelectric scanners [105], microelectromechanical systems (MEMS) scanners [106], electrically-tunable lenses (ETL) [107], voice-coils [55, 108], and tunable acoustic gradient (TAG) lenses [109]. Other interesting approaches specifically designed to improve volumetric imaging rates rely on the conversion of lateral beam deflections, typically using galvanometric mirrors, into axial displacements at kilohertz rates [110, 111]. Furthermore, in general, the temporal resolution of sequentially scanned-2PE approaches can be improved by optimizing the scan trajectory according to a pre-defined region of interest (ROI) (e.g., Lissajous scanning [105]).

While successful single-cell optogenetic activation using scanning-2PE based on galvanometric mirrors has been demonstrated [3, 112, 113], photostimulation based on purely sequential scanning is incompatible with channelrhodopsins that exhibit fast kinetics, since a large portion of the somatic membrane of each neuron must be scanned before the channels begin to close in order to integrate sufficient photocurrent and reach the threshold for action potential firing. Purely sequential raster-scanned-2PE approaches are not capable of high-fidelity, coincident excitation of multiple neurons [102, 114, 115]. Similarly, sequentially scanned-2PE methods have only demonstrated sufficient temporal resolution for voltage imaging by extreme reductions of the field of view to a single line [116] or point [117].

The acquisition rates of scanning-2PE systems can be increased by random-access approaches. These techniques use multiple AODs to rapidly deflect a tightly focused beam to a set of pre-defined three-dimensional locations [98, 118]. Random-access scanning has been successfully applied to both calcium and voltage imaging of up to 20 distinct three-dimensional positions at kilohertz sampling rates [14, 119, 120]. In principle, it is possible to achieve denser spatial sampling than has been demonstrated by random-access scanning; the fundamental limit for unambiguous signal assignment in fluorescence microscopy is the fluorescence lifetime (~ns) [121]. Spatiotemporal multiplexing methods aiming to approach this upper bound have been successfully applied to ultrafast recording of neural activity with calcium and voltage indicators [122, 123]. Furthermore, since the lifetime of common fluorophores is shorter than the pulse separation of common mode-locked lasers used for 2PE, single pulses can be divided into multiple beamlets (diffraction-limited spots), each of which can be laterally or axially displaced to illuminate distinct sample regions at different (although, in some cases, almost simultaneous) times. Fluorescence signals sequentially excited by different beamlets can be de-multiplexed by accurate synchronization of the beam displacement approach with the detector using high-speed electronics [60, 108, 124]. Neglecting scattering, spatiotemporally multiplexed fluorescence from different locations can be unambiguously assigned to its origin provided that the effective dwell-time is longer than the excited state lifetime of the fluorophore.
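The pulse-separation argument above can be made quantitative with a short calculation. The numbers below are nominal assumptions (an 80 MHz oscillator and a ~3 ns excited-state lifetime typical of GFP-like fluorophores), not values tied to any specific system.

```python
# Back-of-the-envelope sketch of the temporal-multiplexing limit described above.
f_rep = 80e6                        # assumed laser repetition rate (Hz)
pulse_separation = 1.0 / f_rep      # 12.5 ns between successive pulses
tau_fluor = 3e-9                    # assumed excited-state lifetime (s)

# Criterion from the text: each beamlet's effective dwell time must exceed the
# fluorescence lifetime, so the number of beamlets that fit between two pulses is roughly:
n_beamlets = int(pulse_separation // tau_fluor)
print(f"pulse separation {1e9 * pulse_separation:.1f} ns -> up to ~{n_beamlets} beamlets")
# In practice, delays of several lifetimes (and hence fewer beamlets) are preferred so
# that emission from one beamlet has largely decayed before the next one fires.
```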

An alternative approach to increase temporal resolution is to modulate the electromagnetic field and increase the instantaneous volume of excitation using so-called parallel methods. Since the inception of laser scanning microscopy, efforts have been made to increase the extent of the excitation beam and hence reduce the dimensionality of the raster scan required to fully sample the region of interest. For instance, voltage imaging at rates of 15 kHz has been demonstrated by rapidly scanning holographically generated foci using AODs to simultaneously excite large membrane areas [14]. More common variants of this approach, such as line-scanning, increase the excitation extent in a single direction and capture two-dimensional images by scanning in the transverse direction [125]. Widefield temporal focusing takes this concept to its theoretical limit by performing line scanning at the speed of light [126, 127]. Line-scanned tomography has also been used to achieve millisecond-resolved recordings of voltage and calcium indicators [128]. The dimensionality of the excitation beam has also been increased axially to form Bessel and Airy beams [129,130,131,132], for volumetric imaging based on lateral scanning (See also Chap. 10). In many cases, elongated foci result in the projection of axial information onto a two-dimensional recording, which can limit its applicability to sparsely labeled samples. This can be overcome by spatial multiplexing to record stereoscopic information [133].

Moving from one-dimension scanning approaches toward scanless configurations, one class of parallel-2PE approaches use phase modulation to spatially multiplex the excitation beam and simultaneously project multiple foci in three-dimensions to spatially separated sample regions. For instance, spatial light modulators (SLMs) have been used to deflect beamlets to different three-dimensional sample positions through Computer-Generated Holography (CGH) [134, 135] and perform both photostimulation and imaging [80, 104, 136, 137]. The number of beamlets and their position can be dynamically updated up to the SLM refresh rate (~420 Hz for the latest SLM models). Recent innovations such as the combination of overdrive with phase reduction [138], or the sequential illumination of two SLMs [8] have achieved refresh rates in the kHz-range. SLM-based spatially multiplexed calcium imaging has been combined with both single pixel [80] and camera detection [139]. Furthermore, calcium imaging at 1 kHz acquisition rates has been demonstrated by using a microlens array rather than an SLM to generate a grid of beamlets [140]. A common approach for 2P photostimulation combines SLM-based multiplexing with a pair of galvanometric mirrors which laterally sweep each focus in a spiral motion spanning the average soma diameter [8, 102, 103, 115, 140,141,142,144]. This method can simultaneously excite large ensembles of neurons without compromising temporal resolution with respect to the single-cell spiral scanning case (see also Chap. 3). Similarly, as for purely sequential-2PE, the temporal resolution of these hybrid parallel-sequential methods can be improved by upgrading the component responsible for sequential scanning.

Another category of parallel-2PE approaches uses phase modulation to increase the lateral extent of the excitation beam and perform scanless excitation [145]. For instance, CGH using SLMs can also be used to sculpt light into arbitrary shapes. This is generally combined with temporal focusing [146, 147] to preserve axial resolution, which scales linearly with lateral extent for holographic beams and quadratically for loosely focused quasi-Gaussian beams [148]. Techniques for distributing temporally focused light throughout a three-dimensional volume have been developed [114, 149, 150], and low-numerical aperture (NA) temporally focused Gaussian beams [113, 151], CGH, and generalized phase contrast (GPC) have all been applied to photostimulation and imaging [152,153,154,155,156,157,158]. Since parallel (scanless) 2PE methods can simultaneously excite opsins distributed throughout the soma, high photocurrents can be evoked efficiently, independently of the off kinetics, which facilitates control of neuronal activity with sub-millisecond jitter [158]. Moreover, in contrast to scanning approaches, in parallel approaches the temporal resolution of the activation process is defined solely by the dwell time of the physiological process, i.e., the time the beam must remain on target to evoke the desired physiological effect.

Having excited an indicator of neural activity using one of the methods outlined above, the next challenge is to detect the fluorescent emission. Unfortunately, popular calcium and voltage reporters fluoresce in the visible region of the electromagnetic spectrum, although the development of activity reporters fluorescing in the infrared (IR) is an active area of research [158,159,161]. Thus, visible photons emitted from fluorophores located deep in scattering tissue will typically experience multiple scattering events prior to detection. This is least problematic for sequentially scanned-2PE methods since all collected fluorescence can reasonably be assumed to have been generated by ballistic photons at the focal region. Hence any signal recorded at a given time can be assigned to the correct spatial location (again provided that the dwell time is longer than the fluorescence lifetime). 2PE imaging methods which record fluorescence from different voxels simultaneously are typically less robust against scattering. Beyond a few scattering lengths, the origin of fluorescent photons becomes ambiguous, which limits the depth of spatially multiplexed methods. Crosstalk can be reduced by increasing the spatial separation between excitation foci, but this is achieved at the cost of maximum acquisition rate for full-frame scanning [140]. Computational methods have also been developed to overcome scattering-induced ambiguity by exploiting priors such as high-resolution spatial maps [144, 162], temporal signatures [163,164,165], or adaptive optics [166,167,168]. Finally, to correctly identify signals from different neurons excited using three-dimensional, spatially multiplexed methods, the effective depth of field (DOF) of the detection axis must be extended with respect to the widefield case. Common extended-DOF approaches include multi-focal plane microscopy [169] and point spread function engineering [170], which encodes information about axial position as lateral changes in intensity.

In spite of the number of technological developments outlined in this section, many 2P all-optical optogenetic studies performed to date have used parallel excitation via CGH (either extended holographic spots or spiral scanning) for photoactivation and galvanometric scanners (both resonant and non-resonant), occasionally combined with an ETL, for calcium imaging across multiple axial planes [8, 103, 115, 141,142,143, 154, 156, 157, 171]. These studies have already provided novel insights into the principles of neural coding, and it is anticipated that the wider adoption of newer technologies will enable further progress. To assist in this dissemination, the next section will provide specific details about the laser sources required for all-optical neurophysiology experiments and the implementation of Generalized Phase Contrast and Temporal Focusing, followed by a protocol for preparing hippocampal organotypic slices.

3 Implementation of Methods

3.1 Laser Sources

The feasibility of two-photon all-optical neurophysiology projects is largely contingent on the first element in the optical path, the laser, which ultimately dictates experimental parameters such as the number of neurons that can be probed simultaneously, the maximal speed of interrogation, and which probes can be excited (according to their action spectrum). This section will provide a general review of the different laser characteristics that impact the efficiency of two-photon excitation and describe how the choice of laser can be optimized based on specific experimental parameters.

To review, two-photon excitation occurs when two photons, with sufficient combined energy, are absorbed quasi-simultaneously and a molecule is excited into a higher energy level [172]. The number of photons absorbed per molecule, per unit time, via two-photon absorption (N2P) is proportional to the two-photon cross-section (σ2P) and to the square of the instantaneous intensity (N2P ∝ ⟨I(t)²⟩). The low values of typical 2PE cross-sections necessitate the use of high time-averaged photon fluxes to excite actuators and indicators at sufficient rates. This can be achieved using mode-locked lasers which generate femtosecond (fs) pulses of light. It is intuitive that, at a given average power, shorter pulses and fewer pulses per unit time result in a greater concentration of photons, which ultimately leads to a higher probability of quasi-coincident two-photon absorption. More formally, the concentration of photons in time can be parametrized according to the laser duty cycle, which is defined as the product of the repetition rate (frep) and pulse duration (τpulse) and corresponds to the fraction of time per unit interval during which there is irradiance. Prior to saturation, and at a given average power, the rate of two-photon absorption is higher for pulsed lasers as compared with their continuous wave (CW) counterparts by a factor proportional to the inverse duty cycle:

$$ \left\langle N_{2\mathrm{P}} \right\rangle \propto \left\langle I(t)^2 \right\rangle \propto \frac{g_{\mathrm{p}}\left\langle I(t) \right\rangle^2}{f_{\mathrm{rep}}\,\tau_{\mathrm{pulse}}} $$

where gp ~0.558–0.664 [173] is a unitless factor which accounts for the fact that real pulses emitted from mode-locked lasers are not rectangular.

In fact, the wide adoption of 2P-LSM was aided by the development and commercialization of reliable, mode-locked lasers which provided enough energy to achieve sufficient rates of two-photon excitation of common fluorophores [174,175,176]. Ti:Sapphire oscillators exhibiting 100 fs pulse widths and 80 MHz repetition rates (12.5 ns pulse separation) have become the workhorses of sequential 2P-LSM since these lasers provide an ~100,000-fold increase in the rate of two-photon excitation as compared with CW excitation at the same average power, allowing 2P-LSM imaging to be performed using much more palatable average powers (milliwatts in comparison to kilowatts). However, these lasers no longer represent the gold standard for all-optical neurophysiology experiments, particularly those in which multiple neurons are probed simultaneously. The larger instantaneous extent of the excitation area in parallel methods, or the division of the original laser beam into a number of beamlets for parallel spiral scanning, necessitates the use of much higher peak pulse intensities. Two obvious strategies for increasing the pulse energy while maintaining average power are decreasing the pulse width or the repetition rate. In practice, some reduction of the pulse width below the standard 100 fs value is possible [177, 178], provided that the spectral width remains narrower than the action spectra of the actuators and indicators (to maintain excitation efficiency). However, this approach requires careful dispersion management, particularly when elements such as SLMs and diffraction gratings are employed in the optical path. Much larger gains can be achieved using amplified lasers with low repetition rates. Ytterbium-doped fiber lasers with central wavelengths in the region of 1030–1040 nm [179] are now commonly used for in-vivo imaging and photostimulation, offering instantaneous powers that are orders of magnitude higher than those of conventional tunable lasers. The use of Ytterbium-doped fiber amplifiers with microjoule pulse energies is necessary in order to simultaneously photostimulate neural ensembles composed of tens of neurons [5, 6, 103, 143]. Nevertheless, since these systems emit light at fixed wavelengths, the choice of opsin is constrained and multiple lasers with different wavelengths must be used to excite different sensors and actuators. Solutions that offer greater flexibility in terms of wavelength while delivering high-energy (microjoule) pulses can be found in systems using optical parametric amplification (OPA) for the generation of the excitation beam [79, 81].
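The ~100,000-fold figure quoted above follows directly from the duty-cycle expression given earlier; the short check below uses a pulse-shape factor within the range quoted for gp and the standard 100 fs / 80 MHz oscillator parameters.

```python
# Quick check of the pulsed-vs-CW 2PE enhancement at equal average power,
# using <N_2P> ∝ g_p <I(t)>^2 / (f_rep * tau_pulse) from the text.
g_p = 0.59                      # pulse-shape factor (within the 0.558-0.664 range quoted)
f_rep = 80e6                    # repetition rate (Hz)
tau_pulse = 100e-15             # pulse duration (s)

duty_cycle = f_rep * tau_pulse
enhancement = g_p / duty_cycle  # <I(t)^2>_pulsed / <I(t)^2>_CW at the same <I(t)>
print(f"duty cycle = {duty_cycle:.1e}; 2PE rate enhancement ~ {enhancement:.1e}")
# -> duty cycle 8e-6 and an enhancement of ~7e4-1e5, i.e. the ~100,000-fold figure.
```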

When probing biological preparations with such high irradiances (which can often exceed 10²⁴ photons cm⁻² s⁻¹) it is of course necessary to consider the possibility of physiological perturbations. Photoperturbations based on linear absorption processes (N1P ∝ ⟨I(t)⟩), such as heating (via single-photon absorption) or optical trapping [180, 181], occur throughout the excitation beam while higher-order processes (NnP ∝ ⟨I(t)ⁿ⟩), for instance, photolysis, ablation, and optical breakdown [182,183,184], are confined to the focal region. This is particularly important to consider when choosing the appropriate excitation approach for photostimulation [185]: parallel methods generally use lower power densities than spiral scanning but higher average powers. Since the optimum excitation parameters and signal to photoperturbation ratio are likely to be highly dependent on the specific characteristics of the sample preparation, it is advisable to vary the repetition rate, pulse width, and average power in each case if possible [186]. The optimal excitation parameters are likely to be different for different excitation modalities.

3.2 Beam Shaping with Generalized Phase Contrast

As outlined in Sect. 2.3, many parallel two-photon excitation approaches rely on lateral beam sculpting. A correspondingly wide variety of methods based on amplitude or phase modulation have been conceived of and demonstrated experimentally. Phase modulation is generally preferable since it is more power efficient than amplitude modulation. Computer-generated holography (CGH) is currently the most common phase modulation method used for photoactivation or imaging in all-optical neurophysiology experiments. Since CGH is described in detail in other chapters of this book (Chaps. 3, 4, and 11), this section will focus on the principles and implementation of an alternative phase modulation approach: generalized phase contrast (GPC) [187].

GPC is an efficient approach for transverse beam shaping and has been applied to imaging [188, 189], photomanipulation [190,191,192], and atom trapping [193]. GPC patterns have smooth, speckle-free intensity profiles and can be combined with temporal focusing for depth-resolved, robust excitation deep in scattering tissue [147, 194]. As demonstrated in Fig. 5a, in GPC the phase imprinted on a beam (using a phase mask or an SLM) is mapped to intensity variations in a conjugate image plane by engineered constructive and destructive interference. The simplest implementations of GPC are based on 4f arrangements of lenses, constructed as follows (Fig. 5a): the first phase-modulating element (hereafter SLM) is located a distance f1 prior to the first lens (L1), which has focal length f1 and is referred to hereafter as the Fourier lens. The required SLM phase (ϕxy(x,y)) depends on the spatial profile of the desired pattern. For binary GPC, ϕxy = ϕ1 for SLM pixels inside the pattern and ϕxy = ϕ2 for SLM pixels outside of the pattern; ϕ1 = π and ϕ2 = 0 is a simple (and useful) choice. An element known as a phase contrast filter (PCF) is located in the Fourier plane (FP) of L1, a distance f2 prior to the second lens (L2), which has focal length f2. The PCF applies a selective phase shift to the field in the Fourier plane. The phase shift imparted by the PCF depends on its thickness (d) and the refractive index of the substrate (n2): ϕPCF = 2πd(n2 − n1)/λ, where n1 is the refractive index of the medium surrounding the PCF (usually n1 = 1 for air, and n2 = 1.45 for a PCF fabricated from fused silica). For binary GPC with ϕ1 = π and ϕ2 = 0, constructive interference in the output pattern occurs for ϕPCF = π. The resulting interference pattern is formed in the image plane (IP) of the second lens, a distance f2 from L2.
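As a simple worked example of the PCF relation quoted above, the sketch below (assuming a fused-silica filter in air and an illustrative wavelength of 1030 nm) computes the filter depth that yields a π phase shift; the helper names are our own.

```python
import numpy as np

# A minimal sketch, assuming the relation phi_PCF = 2*pi*d*(n2 - n1)/lambda given in the
# text; the numerical values (1030 nm, fused silica in air) are illustrative only.

def pcf_phase(d_m, wavelength_m, n_substrate=1.45, n_medium=1.0):
    """Phase shift imparted by a PCF of depth d."""
    return 2 * np.pi * d_m * (n_substrate - n_medium) / wavelength_m

def pcf_depth_for_pi(wavelength_m, n_substrate=1.45, n_medium=1.0):
    """Depth giving a pi phase shift (constructive interference for binary GPC)."""
    return wavelength_m / (2 * (n_substrate - n_medium))

lam = 1030e-9                               # excitation wavelength (illustrative)
d_pi = pcf_depth_for_pi(lam)                # ~1.14 um of fused silica relative to air
print(d_pi, pcf_phase(d_pi, lam) / np.pi)   # depth in meters, phase in units of pi
```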

Fig. 5
A representation of Wavefront engineering. a. A horizontal arrangement of S L M, L 1, F P, L 2, and I P. The second indicates a phase shift in P C F with one tall peak bound by a short peak on either side. The third amplitude curve has a tall broad peak bound by a small downward parabola on either side. Image b and c has a square with shaded circle and a doughnut structure, respectively, to indicate the patterns of fluorescence.

Wavefront engineering based on Generalized Phase Contrast. (a) (i) Schematic representation of a common configuration for Generalized Phase Contrast. The beam is modulated using an SLM, which is used to impart a phase shift to the portion of the beam corresponding to the desired pattern. The SLM phase should match the desired pattern (up to a magnification factor according to the respective focal lengths of L1 and L2). In the binary case, the SLM is usually used to impart a π phase shift to the pixels within the pattern and 0 to those outside. The synthetic reference beam is the portion that is phase shifted by the phase contrast filter (PCF), which typically imparts a π phase shift relative to the field that does not pass through the PCF, referred to here as the modulated beam. The different portions of the beam are recombined by L2 in the Image Plane (IP), where the modulated and synthetic reference fields interfere to form the desired pattern. (ii) Cartoon representations of the ideal 2D amplitudes and phases of the electric fields in the input (SLM) plane and the output (Image) plane. The phase profile of a typical PCF is shown centrally, with the filter diameter indicated by dashed black lines. (iii) 1D cross sections of the amplitudes and phases of the electric fields in the case of binary circle GPC. (b) 2-photon excited fluorescence from a thin rhodamine layer for two different patterns: circle and ring GPC. Scale bars represent 10 μm

To some extent, the perceived complexity of GPC arises from the number of different parameters that contribute to pattern fidelity. To elucidate the effects of some of these parameters, we will discuss their impact on three important metrics of pattern quality relevant to two-photon excitation: efficiency, uniformity, and contrast. In this context, efficiency is defined as the fraction of total energy contained within the pattern, uniformity as the inverse of the curvature of intensity within the pattern, and contrast as the normalized difference between the maximum and minimum intensity in the pattern vicinity, (Imax − Imin)/(Imax + Imin). While it is generally desirable that these metrics are maximized for two-photon excitation based on sculpted light, this cannot be achieved using low-NA Gaussian beams, where uniformity throughout the region of interest (typically the neuronal soma) necessitates use of a large beam waist, resulting in low pattern efficiency. To explain how uniformity and efficiency can be jointly maximized in GPC, we will consider a simple example based on an input Gaussian beam and a simple binary pattern commonly used for two-photon excitation: a circular disk of uniform intensity (Fig. 5a, b).
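The sketch below illustrates how these three metrics might be evaluated numerically on a simulated intensity distribution; note that the uniformity measure used here (one minus the relative standard deviation inside the pattern) is a convenient stand-in for the curvature-based definition above, and the truncated-Gaussian test case is purely illustrative.

```python
import numpy as np

# A minimal sketch of the three pattern-quality metrics discussed above, evaluated on a
# simulated intensity image. The uniformity measure (1 - relative standard deviation
# inside the pattern) is a simple proxy for the curvature-based definition.

def pattern_metrics(intensity, mask):
    """intensity: 2D array; mask: boolean 2D array, True inside the desired pattern."""
    inside = intensity[mask]
    efficiency = inside.sum() / intensity.sum()        # fraction of energy in the pattern
    uniformity = 1.0 - inside.std() / inside.mean()    # proxy for flatness inside the pattern
    i_max, i_min = inside.max(), intensity[~mask].max()
    contrast = (i_max - i_min) / (i_max + i_min)       # Michelson-type contrast
    return efficiency, uniformity, contrast

# Illustrative example: a low-NA Gaussian evaluated against a flat target disk
x = np.linspace(-1, 1, 256)
xx, yy = np.meshgrid(x, x)
r2 = xx**2 + yy**2
mask = r2 <= 0.25**2                        # target disk of radius 0.25 (arbitrary units)
gaussian = np.exp(-2 * r2 / 0.25**2)        # Gaussian with waist matched to the disk radius
print(pattern_metrics(gaussian, mask))      # reasonably efficient, but poorly uniform
```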

Consider the propagation of the field modulated by the SLM through the system in the absence of the PCF (Fig. 6a, upper). In the image plane, the modulated field is a magnified image (according to the ratio of f2/f1), of the input field with the imprinted phase profile ϕxy. Given the modulated field in the image plane, it is possible to find the corresponding ideal “reference field”, which, summed with the modulated field would generate the desired pattern with maximal efficiency, uniformity, and contrast (Fig. 6a, lower). This requires total constructive interference between the reference and modulated fields at all positions in the image plane within the pattern and total destructive interference at all positions outside. Achieving this stringent condition requires that the modulated and reference fields:

  (a) Have identical amplitude outside of the pattern.

  (b) Have complementary amplitude inside the pattern.

  (c) Arrive exactly in phase (modulo 2π) within the pattern.

  (d) Arrive exactly out of phase (modulo 2π) outside of the pattern.

Fig. 6
3 graphs in 2 rows each for phase 0 and at pi, and points to the box below. All the middle column graphs are for the modulated field. First column for desired field or actual field. Third column for ideal or synthetic reference field.

Intuitive optimization of phase contrast filter for GPC. (a) The ideal reference field would generate total constructive interference at all positions in the image plane within the desired pattern and total destructive interference at all positions outside. The ideal reference field can be calculated by subtracting the modulated field (i.e., the magnified image of the field at the SLM plane) from the field corresponding to the desired pattern. The colors of the field profiles represent their phase ϕ, (blue: ϕ = 0 and red: ϕ = π). (b) The Fourier transform (denoted \( \mathcal{F} \)) of this ideal reference field gives its profile in the Fourier plane, where the PCF is located. The profiles of the ideal reference field and modulated field in the Fourier plane are used to guide the choice of an optimal PCF filter in GPC. The optimal PCF parameters are those for which the synthetic reference field most closely matches the ideal reference field. It is clear that this occurs when the PCF imparts a π phase shift and its edges coincide with the first zero crossings of the modulated field (indicated by black dashed lines). (c) Since the synthetic reference field cannot completely match the ideal reference field, there exist some differences between the ideal output field and that which is obtained. In most cases, there is a mismatch between the beam waists of the synthetic reference and the modulated fields, resulting in a “ring-of-light” surrounding the output pattern (highlighted by gray arrows). This is normally blocked by an iris positioned in a conjugate image plane. Furthermore, since the synthetic reference field is typically composed of the low spatial frequency components of the field, there are no small features and the Gaussian profile of the input beam is not compensated for (highlighted by black arrows)

In GPC, the reference field is derived from the input field itself: the portion of the field that is phase shifted by the PCF can be considered a so-called “synthetic reference field” (SRF). The propagation of the SRF through the 4f system can be considered separately from the rest of the field (hereafter referred to as the modulated field), as demonstrated in Fig. 6b. The efficiency, uniformity, and contrast in the output pattern are maximized by finding the properties of the PCF such that the SRF approaches the ideal reference field while the modulated field is minimally perturbed. The optimal characteristics of the PCF for satisfying conditions (a)–(d) in the image plane can be deduced by comparing the profiles of the ideal reference and modulated fields to the synthetic reference field in the Fourier plane (Fig. 6b). For instance, it is clear that for the particular binary example of a disk, the synthetic reference field should be phase shifted by π in order to resemble the ideal reference field (Fig. 6b). Secondly, the edges of the PCF should coincide with the first zero-crossings of the modulated field, and thirdly the form of the phase contrast filter should reflect the symmetry of the desired intensity pattern (for instance, the highest fidelity circular patterns are obtained using circular filters, whereas elliptical patterns would benefit from correctly oriented elliptical filters). More complex patterns would benefit from more complex filter shapes, although high efficiencies (>60%) can still be achieved by using more common circular or rectangular filters. In the case of the circular disk pattern with an appropriately sized PCF, the efficiency of the output pattern is theoretically 70–80% and the maximum intensity is 3× higher than the de-magnified input Gaussian beam [195].

The 30% loss in efficiency is mainly a result of the differences between the synthetic and ideal reference fields. Firstly, the SRF has a narrower diameter and lower amplitude than the ideal reference field in the Fourier plane (Fig. 6b). Consequently, the SRF in the image plane is broader and of lower amplitude than the modulated field and condition (a) is not met, resulting in a dim halo of light (Fig. 6c) surrounding the pattern due to partial destructive interference. Note that this extraneous light can be blocked using an iris in a conjugate image plane if problematic. Secondly, since the SRF generated by using a PCF to transmit the central lobe is constituted of low spatial frequency components, any sharp features in the synthetic reference wave are precluded. This reduces the uniformity of the pattern with respect to the ideal case, since the SRF retains the Gaussian envelope of the input beam (unless a beam shaper is used prior to the first SLM such that it is illuminated with a top-hat beam [195]). For small filters, the SRF approaches the “DC component” of the incident field – a Gaussian envelope for most experimental configurations. As the filter size increases, the SRF more closely resembles a magnified version of the input field, while the modulated field only contains the high spatial frequency components of the input pattern – for instance, the pattern edges and small features. The best pattern (highest efficiency, uniformity, and contrast) is achieved when the edges of the PCF coincide with the first zero crossings of the modulated field.

The concept of minimizing the differences between the synthetic and ideal reference fields to maximize efficiency, uniformity, and contrast is more general than the simple circular disk example presented here and has been verified for a variety of analytically tractable patterns [195, 196]. Experimentally, the properties of the synthetic reference and modulated fields depend on interdependent system parameters such as the diameter and profile of the input beam, the spatial profile of the phase imparted by the SLM (i.e., the desired output pattern), and the focal length of L1. Since these parameters are interrelated, it is useful to introduce a level of abstraction and optimize the efficiency, uniformity, and contrast as a function of ξ and η, where ξ is defined as the ratio of the pattern radius at the SLM to the waist of the input Gaussian beam and η as the ratio of the radius of the focused beam in the Fourier plane to the radius of the PCF [196]. For certain patterns, the optimal values of ξ and η can be found analytically, or numerically via simulations for more complex patterns.
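For concreteness, the sketch below computes ξ and η from a set of illustrative system parameters, taking the Fourier-plane waist of the unmodulated Gaussian input as w_f = λf1/(πw0); the numerical values are assumptions, not those of any specific GPC setup.

```python
import numpy as np

# A minimal sketch of the dimensionless GPC parameters xi and eta defined in the text.
# The Fourier-plane waist of the (unmodulated) Gaussian input is approximated as
# w_f = lambda * f1 / (pi * w0); all numerical values below are illustrative.

def gpc_xi(pattern_radius_at_slm, input_waist):
    return pattern_radius_at_slm / input_waist

def gpc_eta(wavelength, f1, input_waist, pcf_radius):
    w_fourier = wavelength * f1 / (np.pi * input_waist)   # focused Gaussian waist at the PCF
    return w_fourier / pcf_radius

lam, f1 = 1030e-9, 0.4      # wavelength and Fourier-lens focal length (illustrative)
w0 = 5e-3                   # input beam waist at the SLM: 5 mm
r_pattern = 3e-3            # radius of the binary disk displayed on the SLM: 3 mm
r_pcf = 30e-6               # physical PCF radius: 30 um

print(gpc_xi(r_pattern, w0))          # xi ~ 0.6 for these values
print(gpc_eta(lam, f1, w0, r_pcf))    # eta ~ 0.9 for these values
```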

The best approach for achieving the theoretically optimal values of ξ and η experimentally depends on the precise constraints of the experimental setup:

  (i) ξ can be tuned by varying either the waist of the input beam or the diameter of the pattern displayed on SLM1.

  (ii) η can be changed by varying the waist of the input beam, the focal length of L1, or the physical diameter of the PCF.

In optical systems requiring volumetric two-photon excitation, GPC is generally combined with CGH for flexible 3D pattern projection [197] and with temporal focusing to improve the axial resolution [147, 150]. In such systems, the downstream parameters are tightly constrained in order to achieve a field of excitation with a particular extent and to meet the conditions, described in the following section, necessary for optimal temporal focusing. Hence, the extent of the SLM phase pattern for GPC is typically set according to the desired size of the pattern at the focal plane of the microscope objective and L1 is kept fixed, while the input beam and PCF diameters are varied in order to optimize efficiency, uniformity, and contrast in the output pattern. The optimization process for a given set of experiments is eased by making it possible to tune the diameter of the incident beam without altering its divergence (for instance, by having a variety of suitable telescopes mounted on switchable magnetic bases), and by imprinting a selection of suitable PCFs (with a range of diameters and shapes) on a single phase mask mounted on a three-axis micrometer stage, so that it is easy to transition between PCFs. For a given experiment and desired sculpted light pattern, the initial choice of PCF diameter is generally guided by simulations. In lieu of simulations, a sensible starting point is to choose a PCF diameter matched to the beam waist of the unmodulated Gaussian beam in the Fourier plane and then to test several PCFs with similar diameters to maximize efficiency, uniformity, and contrast. Using this strategy, efficiencies greater than 70% can be routinely obtained experimentally.

3.3 Implementing Temporal Focusing

Due to their interferometric character, GPC patterns suffer from a lack of axial confinement and optical sectioning [147] (this is in notable contrast to patterns generated using CGH). Temporal focusing has been used to restore axial resolution for GPC, and other extended light patterns that have been used for 2P optogenetics [147, 153, 171, 194]. The implementation of temporal focusing in combination with Gaussian beams has been extensively described in the literature [127, 198,199,200] and more detailed descriptions may be found in this book (Chaps. 4 and 9).

In summary, an optical element placed in a conjugate image plane of the optical path is used to separate the spectral frequencies (hereafter “colors” for simplicity) of the femtosecond laser pulses. While diffusers or scatterers and diffraction gratings are both suitable optical elements, diffraction gratings offer a more efficient directional separation of the different colors and can be used in conjunction with lasers commonly used for multiphoton microscopy (which exhibit characteristic pulse durations of hundreds of femtoseconds, corresponding to pulse bandwidths of tens of nanometers). Beam expanders are commonly used prior to the scatterer or diffraction grating to adjust the beam diameter in order to achieve the desired Gaussian beam size at the sample. The orientation of the grating is usually chosen such that the first order is diffracted perpendicular to the grating (by convention, this corresponds to θdiff = 0°) (Fig. 7), to avoid tilted illumination of the image plane at the sample in the case of a large ROI illumination or a large field of excitation. However, this is not generally the same angle that would maximize the light throughput for a blazed grating (the so-called Littrow configuration) (see Note 1).

Fig. 7
A diagrammatic representation of the temporal focusing implementation. a. Consists of incident beam, beam expander, mirror, grating, lens, and objective. b. S L M and lens replace the beam expander and mirror. At the inset is a holographic pattern with several rod-shaped structures distributed in a grainy background. c. P C F and lens are used additionally. The holographic pattern is a shaded circle in an evenly shaded rectangle.

Implementation of temporal focusing with light-shaping methods. (a) Temporal focusing of a Gaussian beam. A diffraction grating placed in a conjugate image plane of the optical path is used to separate the spectral frequencies (“colors”) of the femtosecond laser pulses. The grating is illuminated with a parallel Gaussian beam of the appropriate size adjusted through a beam expander, for giving the desired beam size at the sample plane. The orientation of the grating is usually chosen such that the 1st order is diffracted perpendicular to the grating (θdiff = 0° ) to avoid tilted illumination of the image plane. Conjugation of the grating (image) plane to the sample image plane is realized by a telescope consisting of a lens and the microscope objective. (b) In temporal focusing of CGH beams the grating is placed at the image plane of CGH, illuminated with the holographic pattern generated by addressing the corresponding phase on the SLM (inset). (c) Similarly as in CGH, in temporal focusing of GPC beams the grating is illuminated with the intensity pattern generated at the output (image) plane of the GPC configuration, when addressing the SLM with the appropriate, in the simplest case binary, pattern (inset). In all panels θ denotes the incident angle of the light-shaped beam onto the grating. In (b) and (c) the beam expander prior to the SLM is omitted for simplicity. (Adapted from Ref. [201])

The groove density of the diffraction grating, G, and the focal length f of the lens used as the tube lens of the microscope (Fig. 7) should be chosen according to the properties of the microscope objective: achieving the tightest axial confinement requires meeting two conditions. Firstly: the extent of the chirped beam L should fill the diameter of the back focal plane (dbfp); the extent of the chirped beam due to linear dispersion induced by the diffraction grating is:

$$ L=\frac{dx}{d\lambda}\,\Delta \lambda =\frac{f}{d_{\textrm{G}}}\,\Delta \lambda, $$
(1)

with \( \frac{dx}{d\lambda} \) the linear dispersion induced by a grating with groove spacing \( {d}_{\textrm{G}}=\frac{1}{G} \) (where G is the groove density in lines/mm), and Δλ the spectral bandwidth. The second condition for maximal axial confinement is that the instantaneously illuminated area of the scatterer/diffraction grating should be imaged to a diffraction-limited spot at the focal plane of the objective, which occurs when:

$$ \frac{c\tau}{\sin \theta}\approx \frac{M\lambda}{2\textrm{NA}}, $$
(2)

where c is the speed of light in vacuum, τ is the laser pulse duration, θ is the incident angle of the light beam on the grating, λ is the wavelength, NA the numerical aperture of the objective, and M the effective magnification between the scatterer/diffraction grating and the sample. When temporal focusing is combined with light patterning (such as CGH, GPC, or other approaches for intensity modulation), the grating is generally positioned in the output plane of the patterning method. In this case, the conditions outlined above for optimum axial confinement remain valid. However, the dimensions of the beam at the back aperture of the objective depend on the patterning method and may vary with respect to the case of a Gaussian beam. Patterned light generated using GPC (or amplitude modulation) resembles a Gaussian beam in the sense that it exhibits a smooth phase profile and a large depth of focus (confocal parameter). Changing the pattern changes the illumination of the back aperture but does not improve the axial resolution, since the linear dispersion of the diffracted beam is unaffected.
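The short sketch below gathers the two design conditions, Eqs. (1) and (2), into a quick feasibility check; the time-bandwidth estimate of the spectral width and all numerical values (wavelength, pulse duration, grating, angles) are illustrative assumptions rather than parameters of any particular setup.

```python
import numpy as np

c = 3e8  # speed of light (m/s)

# A minimal sketch of the two temporal-focusing design conditions discussed above.
# The sech^2 time-bandwidth estimate and all numerical values are illustrative.

def spectral_bandwidth(wavelength, tau, tbp=0.315):
    """Approximate spectral width (m) of a transform-limited sech^2 pulse."""
    return tbp * wavelength**2 / (c * tau)

def chirped_beam_extent(f_tube, groove_spacing, dlambda):
    """Eq. (1): lateral extent L of the dispersed beam at the objective back focal plane."""
    return f_tube * dlambda / groove_spacing

def grating_spot_condition(tau, theta_rad, magnification, wavelength, na):
    """Eq. (2): returns (c*tau/sin(theta), M*lambda/(2*NA)); the two should be comparable."""
    return c * tau / np.sin(theta_rad), magnification * wavelength / (2 * na)

lam, tau = 1030e-9, 100e-15           # wavelength and pulse duration (illustrative)
dlam = spectral_bandwidth(lam, tau)   # ~11 nm for a 100 fs pulse at 1030 nm
groove_spacing = 1e-3 / 1200          # 1200 lines/mm grating
L = chirped_beam_extent(f_tube=0.5, groove_spacing=groove_spacing, dlambda=dlam)
print(dlam, L)                        # compare L with the objective back-aperture diameter
print(grating_spot_condition(tau, np.deg2rad(38), magnification=60, wavelength=lam, na=1.0))
```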

The illumination of the back aperture is different in the case of CGH. CGH setups are usually designed so as to illuminate the entire back aperture of the objective – hence extended CGH spots have intrinsically better axial confinement than Gaussian beams of similar lateral extent. Addition of a diffraction grating at a conjugate image plane in the CGH setup for temporal focusing affects the illumination of the objective back aperture in the dispersive direction. Because in CGH configurations the scatterer/grating is illuminated with a focusing beam, the linear dispersion of the field at the back aperture is strongly dependent on the focal length of the lens used prior to the grating/scatterer. A detailed analytical description of the full-width-at-half-maximum (FWHM) of the illumination distribution along the x and y axes at the back aperture (more precisely the back focal plane) of the objective in a CGH-TF setup may be found in Ref. [149]:

$$ {\textrm{FWHM}}_{{\textrm{x}}_{\textrm{BFP}}}=2\sqrt{2\ln 2}\sqrt{\frac{2{\sigma}^2\cos {\left(\theta \right)}^2{f}_2^2}{f_1^2}+\frac{2{f}_2^2\Delta {\lambda}^2}{{d_G}^2}} $$
(3)
$$ {\textrm{FWHM}}_{{\textrm{y}}_{\textrm{BFP}}}=2\sqrt{2\ln 2}\frac{f_2}{f_1}\sigma $$
(4)

where σ is the waist of the Gaussian beam illuminating the SLM, and f1, f2 are the focal lengths of the lenses used to conjugate the SLM to the back focal plane of the objective (Fig. 7b). To optimally illuminate the back aperture and maximize axial confinement, the focal lengths f1 and f2 ought to be chosen according to the constraints set by equation (3), which replaces equation (1), and equation (4), while also aiming to satisfy equation (2). Since these lenses also dictate the extent of the field of excitation, it may be necessary to compromise field of view for the desired axial resolution (or vice versa). For more details refer to Note 2.
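As a design aid, the sketch below evaluates equations (3) and (4) for illustrative parameters; an optional relay magnification factor (f4/f3) is included so that the same function also covers the multiplexed case of equations (5) and (6) introduced further below. All numerical values are assumptions.

```python
import numpy as np

# A minimal sketch of Eqs. (3) and (4) for the illumination footprint at the objective
# back focal plane in a CGH-TF setup. relay_mag = f4/f3 accounts for an additional
# telescope (cf. Eqs. (5)-(6)); relay_mag = 1.0 recovers Eqs. (3)-(4). Values are illustrative.

def fwhm_bfp(sigma, theta_rad, f1, f2, dlambda, groove_spacing, relay_mag=1.0):
    """Return (FWHM_x, FWHM_y) of the illumination at the back focal plane."""
    pref = 2 * np.sqrt(2 * np.log(2))
    fwhm_x = pref * relay_mag * np.sqrt(
        2 * sigma**2 * np.cos(theta_rad)**2 * f2**2 / f1**2
        + 2 * f2**2 * dlambda**2 / groove_spacing**2)
    fwhm_y = pref * relay_mag * (f2 / f1) * sigma
    return fwhm_x, fwhm_y

sigma = 4e-3               # Gaussian waist at the SLM: 4 mm
theta = np.deg2rad(38)     # incidence angle on the grating
f1, f2 = 0.5, 0.4          # conjugation lenses SLM -> back focal plane (illustrative)
dlam = 11e-9               # ~11 nm bandwidth for a 100 fs pulse
d_G = 1e-3 / 1200          # 1200 lines/mm grating

print(fwhm_bfp(sigma, theta, f1, f2, dlam, d_G))   # compare with the back-aperture diameter
```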

Temporal focusing microscopy is inherently a two-dimensional method, since two-photon excitation only occurs in the vicinity of the focal plane of the objective, which, by design, is conjugate to the grating plane. Even if CGH were used to project patterns onto different axial planes, two-photon excitation would only occur for the patterns projected onto the grating, the rest being suppressed by temporal focusing. Extending temporally focused excitation to three-dimensional (3D) space requires spatial multiplexing, using an SLM in a conjugate Fourier plane following the grating to modulate the phase of each monochromatic beam. Convolution of the temporally focused pattern projected on the grating with the 3D configuration of beamlets at the sample plane creates multiple temporally focused patterns at the positions of the beamlets, which are replicas of the original pattern on the grating. A detailed description of the 3D holographic spatial multiplexing implementation on temporally focused Gaussian beams can be found in Chap. 4, describing the technique 3D-SHOT (3D Scanless Holographic Optogenetics with Temporal focusing) [5, 114].

For greater flexibility in the choice of the excitation shape and size, 3D holographic spatial multiplexing of temporally focused CGH, GPC patterns, or patterns created with amplitude modulation techniques can be used [150]. In those cases, what changes in the optical setup is the way the beam is modulated before the grating (Fig. 8); the shape projected onto the grating is the one that is subsequently replicated by 3D point-cloud CGH. For applications where projection of different shapes or spot sizes is necessary, a further variant of the above approaches is to perform light shaping in two dimensions (2D) with a first SLM that is vertically tiled in regions, each encoding a different pattern, and to use the second SLM, tiled in the same number of regions and addressed with different phase profiles, to independently control the position at which each pattern is projected in the sample. Such an example is described in reference [150], where the regions of SLM2 are addressed with phase profiles that control only the axial position of the different patterns; a similar approach for projecting different shapes at different positions is also presented in [150].
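The convolution picture can be illustrated with a toy numerical example: the sketch below convolves a small shaped spot F with a sparse point cloud G to show how replicas of F appear at each targeted position (a 2D stand-in for the 3D case; names and values are illustrative).

```python
import numpy as np
from scipy.signal import fftconvolve

# A minimal sketch of the convolution picture described above: the temporally focused
# pattern F projected on the grating is replicated at each beamlet position encoded by
# the point-cloud hologram G. A 2D toy example; the real multiplexing is 3D.

n = 256
x = np.linspace(-1, 1, n)
xx, yy = np.meshgrid(x, x)

F = (xx**2 + yy**2 <= 0.08**2).astype(float)     # the shaped spot (here a small disk)

G = np.zeros((n, n))                             # point cloud of targeted positions
for cx, cy in [(-0.5, -0.3), (0.2, 0.4), (0.6, -0.5)]:
    G[np.argmin(abs(x - cy)), np.argmin(abs(x - cx))] = 1.0

sample_pattern = fftconvolve(F, G, mode="same")  # replicas of F centered on each point of G
print(sample_pattern.shape, sample_pattern.max())
```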

Fig. 8
A diagram represents multiplexed temporally focused C G H patterns. It consists of the following. Incident beam, S L M 1 and 2, Lenses f 1, f 2, f 3, f 4, grating and objective. The patterns generated are, well-defined nodular structures distributed evenly, broad diagonal lines, and a band of spectra.

Multiplexed temporally focused CGH patterns. Projection of temporally focused patterns in multiple planes consists of a three-step approach: 1. beam amplitude shaping, here by CGH; 2. temporal focusing; and 3. spatial multiplexing using an SLM (SLM2) and 3D point-cloud CGH at a Fourier plane after dispersion of the spectral frequencies on the grating. The pattern generated through phase modulation (inset SLM1) is projected onto the grating (F(X,Y) inset) and replicated by 3D point-cloud CGH to different 3D positions (G(X,Y,Z) inset SLM2). The way SLM2 is illuminated is also shown in the inset. The resulting pattern at the sample is a convolution of patterns F and G. (Reproduced from Ref. [201])

The choice of the different lenses in this kind of configuration is constrained by the requirement of filling the back aperture of the microscope objective used to project patterns into the sample. All telescopes between the grating and the objective must be accounted for. Thus, in the case of CGH (Fig. 8), for instance, equations (3) and (4) are modified as follows:

$$ {\textrm{FWHM}}_{{\textrm{x}}_{\textrm{BFP}}}=2\sqrt{2\ln 2}\frac{f_4}{f_3}\sqrt{\frac{2{\sigma}^2\cos {\left(\theta \right)}^2{f}_2^2}{f_1^2}+\frac{2{f}_2^2\Delta {\lambda}^2}{{d_G}^2}} $$
(5)
$$ {\textrm{FWHM}}_{{\textrm{y}}_{\textrm{BFP}}}=2\sqrt{2\ln 2}\frac{f_2}{f_1}\frac{f_4}{f_3}\sigma $$
(6)
Fig. 9
A set of 2 photographs and 2 images. A. Experimental setup for the dissection of organotypic slices. B. A 6-well plate contains the dissected organotypic slices. C. An organotypic slice contains irregular circular structures arranged diagonally. D. A shaded rectangle with 2 spiral arrangements of clusters of dots.

Organotypic slices. (a) Organization of the hood for the dissection of organotypic slices. (b) Dissected organotypic slices on PTFE membranes placed on inserts in a 6-well plate. (c) Transmitted light image of a patched cell in a hippocampal organotypic slice. (d) Expression of a nuclear targeted fluorescent protein (mRuby) following bulk infection with an AAV. The architecture of the hippocampus is maintained

Implementation of multiplexed temporally focused light shaping (MTF-LS) either with CGH or GPC, or any other kind of amplitude modulation is in general more demanding in terms of alignment and equipment than multiplexing temporally focused Gaussian beams. Care should be taken to align the beam on the two SLMs used, one for light shaping (SLM1; Fig. 8) and the other for 3D point-cloud CGH (SLM2; Fig. 8).

In MTF-LS methods, including those based on Gaussian beams (3D-SHOT), the excitation field is defined by the properties of the SLM used for 3D point-cloud CGH (pixel size and number of pixels, i.e., the size of the SLM) and by the telescope used to magnify this to the back aperture of the objective (\( \frac{f_4}{f_3} \) in our schematic). Calibration of the spot position between the SLM and the camera is achieved in an identical manner as for CGH (see Sect. 3.2) and, similarly, the spot intensity over the entire excitation field must be homogenized by calibrating the diffraction efficiency of SLM2 both laterally in the image plane (xy) and axially throughout the excitation volume (z) (refer to Note 3 for further details). Depending on the light shaping method used, different calibration procedures for compensating diffraction-efficiency losses in light intensity may be needed. For instance, in methods using SLM1 to control the lateral position of the spots in a plane, diffraction efficiency calibration of SLM1 is also necessary; and when the SLMs are tiled in different regions, since light diffracted from each region does not illuminate the round back aperture of the objective in the same way, a separate intensity calibration is needed for each zone of illumination of the objective back aperture [149]. Finally, for systems using MTF-LS with two SLMs and a single pattern projected onto the center of the diffraction grating, it might be useful to orient the grating close to the Littrow configuration to maximize light throughput.
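A possible, purely illustrative way to organize such an intensity compensation is sketched below: measured relative diffraction efficiencies at the target positions are inverted into per-spot weights. The function, the toy efficiency model, and the normalization strategy are hypothetical and not taken from any published implementation.

```python
import numpy as np

# A minimal, hypothetical sketch of diffraction-efficiency compensation for SLM2:
# measured spot efficiencies across the excitation field are used to rescale the
# weight requested for each target so that all spots receive comparable power.

def compensation_weights(target_xyz, efficiency_map):
    """target_xyz: (N, 3) spot positions; efficiency_map: callable (x, y, z) -> measured
    relative efficiency in (0, 1]. Returns weights normalized to the least efficient spot."""
    eff = np.array([efficiency_map(x, y, z) for x, y, z in target_xyz])
    return eff.min() / eff          # boost spots where the SLM diffracts less light

# Toy efficiency model: efficiency falls off with lateral and axial distance from center
toy_eff = lambda x, y, z: np.exp(-(x**2 + y**2) / 200.0**2) * np.exp(-z**2 / 80.0**2)
targets = np.array([[0, 0, 0], [120, 50, 20], [-80, -90, -40]], dtype=float)  # positions in um
print(compensation_weights(targets, toy_eff))
```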

3.4 Preparation of Organotypic Hippocampal Slice Cultures

While the benefits of using organotypic slice cultures for prototyping preparations for all-optical neurophysiology experiments were described in Sect. 2.2.1 of this chapter, it is important to highlight that this preparation can also be used to address many fundamental questions central to modern neuroscience.

All animal experiments must comply with national regulations. Organotypic hippocampal slices are prepared from mice (here from Janvier Labs, C57Bl6J WT) at post-natal day 8 (P8).

3.4.1 Solutions

Table 1 presents the quantities of reagents required for the solutions used during the preparation of organotypic hippocampal slices (their product references are also listed).

Table 1 Solutions for organotypic hippocampal slice cultures preparation

3.4.2 Equipment

The following equipment is necessary to prepare the organotypic slices:

  • Tissue culture hood

  • Incubator (37 °C; 5% CO2)

  • Dissection stereomicroscope

  • Tissue Chopper (McIlwain tissue chopper, Model TC752)

  • Culture plates (six-wells; Corning 3516 or Sarstedt 83.1839)

  • Millicell Cell Culture Insert (30 mm, hydrophilic Polytetrafluoroethylene (PTFE), 0.4 μm; Sigma PICM03050)

  • PTFE membranes (hydrophilic, 0.45 μm; Millipore FHLC04700)

  • Tissue culture dishes (35 mm, sterile; Corning 353001)

  • Filter paper or Whatman paper (Fisher 11392935)

  • Transfer pipettes, narrow and wide bore (plastic, disposable, sterile, e.g., Sarstedt 86.1171.001, wide bore pipettes can be prepared by cutting the tip)

  • Razor blade (two-sided)

  • Sterilized dissection tools

    • Large scissors (Fine Science Tools 14110-17)

    • Fine scissors (Fine Science Tools 15003-08)

    • Double-Ended Micro Spatula × 2 (Fine Science Tools 10091-12)

    • Curved forceps (Dumont #7, Fine Science Tools 11274-20)

    • Straight forceps (Dumont #5, Fine Science Tools 11254-20)

    • Scalpel handle and blades (#23) (Fine Science Tools 10023-00)

  • 15 mL tubes x2 (sterile, Corning 352095)

  • Micropipette 200 μl (Eppendorf, 3124000083)

  • Tips (sterile, 200 μL, Sorenson Bioscience 14220)

Always use sterile gloves and change them between each dissection.

  1. Prepare the solutions in a sterile environment.

  2. Put 1 mL of Opti-MEM culture medium in each well of the 6-well plates. Place a membrane insert in each well using sterile forceps. Make sure there are no air bubbles beneath the inserts.

  3. Cut the PTFE membrane into small squares (5 × 5 mm) using a scalpel (#23) and place them in the inserts (a maximum of 5 membranes per insert is advised for ease of retrieval). Put the plates in the incubator.

  4. Prepare 4 culture dishes (35 mm) per pup. Place some filter paper in one of them. Half fill each culture dish with the dissecting medium. Place them at 4 °C.

  5. Place the following items under the hood (Fig. 9a):

     (a) Tissue chopper set to cut 300 μm slices.

     (b) Dissection stereomicroscope.

     (c) Sterile transfer pipettes held in 15 mL tubes.

     (d) Dissection tools previously sterilized in 70% ethanol.

     (e) Sterile PBS.

  6. Thoroughly wipe the microscope, tissue chopper, razor blade, and stage with 70% ethanol.

Before starting the dissection for each pup, place 4 petri dishes under the hood.

  7. Anesthetize pups according to local regulations and decapitate using the large scissors.

  8. Flush the head with 70% ethanol and transfer it to the first petri dish. Insert the curved forceps into the eye sockets and remove the skin.

  9. Insert the lower part of the fine scissors into the foramen magnum and cut the skull along the midline, forward to the midpoint between the eyes.

  10. Cut bilaterally, starting from the midline towards the sides.

  11. Gently take apart the skull and, using a short, rounded spatula, transfer the brain to the Petri dish containing the filter paper. Place the brain on the filter paper.

  12. Insert the spatula between the two hemispheres and gently separate them.

  13. Separate the cortex, with the underlying hippocampus, from the brainstem, midbrain, and striatum using the spatula, without touching the hippocampus.

  14. Place the cortex so that the hippocampus is exposed. Flip the hippocampus over and out using the spatula.

  15. Repeat steps 13–14 for the hippocampus of the remaining hemisphere.

  16. Use the wide-bore pipette to transfer the hippocampi to the stage of the tissue chopper and align them perpendicular to the blade.

  17. Remove any excess dissection medium from the stage to minimize motion of the hippocampi during the slicing process.

  18. Cut 300 μm thick slices.

  19. Using the narrow-bore pipette, flush the slices with some dissection medium and transfer them to the 3rd petri dish.

  20. Gently separate the slices with the pipette.

  21. Transfer the best slices to the 4th petri dish and incubate at 4 °C (or on ice) for at least 30 min (refer to Fig. 3 of Gogolla et al. 2006 [203] for selection criteria). Another pup can be dissected during this time.

  22. Following a minimum of 30 min of incubation at 4 °C, retrieve the 6-well plate from the incubator and place each selected slice in the middle of a square membrane using the narrow-bore pipette (Fig. 9b).

  23. Remove any excess dissection medium around the slice using a 200 μL micropipette and put the plate back in the incubator immediately.

      Excess dissection medium affects gas exchange and, consequently, slice health.

  24. After 3 days, remove all culture medium below the insert and replace it with 1 mL of fresh, warm Neurobasal-A culture medium.

  25. Replace the culture medium every 3–4 days.

3.4.3 Bulk Infection

Before infecting, it is essential to let the slices recover and adhere to the membrane for at least 3 days in the incubator following slicing. If possible, opt for AAVs as they are non-pathogenic, non-cytotoxic, and do not integrate into the host genome. Alternatively, to avoid the use of viruses, plasmids can be introduced by electroporation.

When testing a virus for the first time on organotypic slices, it is wise to test different dilutions of the virus, as different combinations of serotypes and promoters can lead to different optimal windows of expression.

  1. Prepare the virus solutions (if necessary, dilute the virus in PBS or 0.9% NaCl) and keep them on ice.

  2. Retrieve the 6-well plate from the incubator and put it under the hood.

  3. Put 1 μL of virus solution on top of the slice. Make sure that the solution covers the entire slice.

  4. Repeat the previous step for each slice to be infected.

     As the virus diffuses into the medium, be careful to infect all the slices in the same well with the same virus preparation, to avoid undesired expression.

  5. Put the plate back into the incubator immediately and wait a few days for expression.

3.4.4 Troubleshooting

  • The slices should flatten and become transparent after a few days in culture (Fig. 9c). Dead slices remain whitish and opaque.

  • Note that WT slices exhibit autofluorescence.

  • Depending on the type of experiment to be conducted, be aware that the use of antibiotics can affect the physiology of the cells.

3.4.5 What Is Essential on the Day of Experiment?

  • Use an upright microscope because the membrane will make it hard to see the cells.

  • For optimal results, oxygenate the external solution.

  • pH should be kept around 7.4 (with HEPES or bicarbonate & bubbling).

  • Take the slice out of the incubator only once everything on the setup is ready.

  • Under optimal conditions the slices may be re-used across multiple experimental sessions.

4 Notes

  1. Tilted illumination of the grating, at relatively large angles compared to the Littrow configuration, is also used in order to increase the optical path difference between the different diffracted colors [126]. In this way the temporal focusing effect is enhanced, because the depth of focus within which all colors arrive in phase to recreate the ultrashort pulse becomes smaller. This also helps when using temporal focusing with pulses in the range of hundreds of femtoseconds.

  2. The field of view, or more correctly the field of excitation (FOE), in CGH is defined by the size of the SLM pixel magnified to the back focal plane of the objective (α) [137, 149]: \( {\textrm{FOE}}_{xy}=2\frac{\lambda {f}_{\textrm{obj}}}{\alpha } \), where \( \alpha =d\frac{f_2}{f_1} \), with d the physical SLM pixel size. Thus, the shorter f2 (or the longer f1), the larger the FOE at the sample; a minimal numerical sketch of this estimate is given after these notes. However, for optimizing the depth of focus in temporal focusing we often need long f2 values in order to increase the linear dispersion at the back aperture of the objective. In other words, FOE and axial resolution in CGH-TF systems do not change in the same direction, and in some cases a compromise must be made. A particular situation arises when objectives with very large back apertures are used. While these objectives are commonly used to increase the accessible field of view for imaging, the magnification \( \frac{f_2}{f_1} \) that we need to use for CGH to fill the objective back aperture is ≤1, which helps neither to increase the FOE for CGH nor to achieve the optimum axial resolution for temporal focusing. In that case the physical SLM pixel size d is critical, and it has to be chosen according to the needs of the application.

  3. For a precise spot intensity calibration, the diffraction efficiency in the xy plane must be characterized for different z positions, for instance, every 5–10 μm of axial displacement. This is a very cumbersome procedure unless it is automated, and it is often omitted. Nevertheless, it can be crucial for applications using volumetric excitation.
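As referenced in Note 2, the sketch below evaluates the FOE estimate with illustrative numbers; the objective focal length is derived assuming a 180 mm tube-lens standard, and all other values are assumptions.

```python
# A minimal sketch of the field-of-excitation estimate from Note 2,
# FOE_xy = 2 * lambda * f_obj / alpha with alpha = d * f2 / f1. Values are illustrative.

def foe_xy(wavelength, f_obj, slm_pixel, f1, f2):
    alpha = slm_pixel * f2 / f1              # SLM pixel size magnified to the back focal plane
    return 2 * wavelength * f_obj / alpha

lam = 1030e-9                                # excitation wavelength (illustrative)
f_obj = 180e-3 / 40                          # 40x objective, assuming a 180 mm tube-lens standard
print(foe_xy(lam, f_obj, slm_pixel=9.2e-6, f1=0.75, f2=0.5))   # FOE in meters for these values
```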

5 Outlook

All-optical neurophysiology is evolving as a useful approach in neuroscience to decode patterns of neuronal activity and to understand how these patterns contribute to neural disorders, cognitive tasks, or specific behaviors. Important achievements in molecular biology and the development of advanced optical methods are contributing to elucidating the neural code. A plethora of optogenetic constructs with variable properties in terms of excitation spectrum, kinetics, and sensitivity can be used in combination with optical methods that provide high spatial specificity and temporal precision. It is now possible to manipulate brain activity at different spatiotemporal scales, throughout large excitation volumes, and also to reach deep brain regions [203,204,206]. One of the latest developments in the field is a bidirectional tool, BiPOLES, based on two potent channelrhodopsins: the inhibitory GtACR2 and the excitatory Chrimson [207]. Bidirectional tools have been developed almost since the advent of optogenetics [45, 208, 209], and BiPOLES builds upon twenty years of developments in opsin engineering and trafficking. As a result, both the excitatory and inhibitory opsins are efficiently trafficked to the membrane with equal sub-cellular distributions, and hence a tightly controlled ratio between excitatory and inhibitory action at specific wavelengths and membrane potentials is achieved. This means that neuronal activation and silencing can be controlled precisely and predictably in all transduced cells within a particular population. BiPOLES ought to facilitate a large number of loss- and gain-of-function experiments, which are necessary for proving the necessity and sufficiency of a particular circuit for a specific disease, for precisely controlling spike timing without changing firing rates, and potentially for realizing the long sought-after optical voltage clamp [210].

In terms of the next steps for optical technology development, all-optical experiments will continue to benefit from the use of lower laser powers, increased acquisition or modulation speeds (for faster acquisition rates/higher temporal precision or larger fields of observation/manipulation), higher spatial resolution, and access to deeper brain regions. 2P excitation microscopy, whether with scanners or with scanless parallel illumination through spatial light modulators, through fibers or fiber bundles for endoscopic applications, with point spread function engineering to achieve mm³ excitation volumes, or in particular configurations allowing mesoscopic imaging, will continue to be developed for studying neural circuits. 3P microscopy has already been used for morphological and functional imaging beyond a depth of 1 mm, but it has not yet been explored for photoactivating neurons in deep brain regions. Moreover, although multiphoton excitation approaches have advanced separately for imaging or photostimulation of neurons, their combination for all-optical manipulation has evolved at a much slower rate and is limited to a handful of laboratories. All-optical neurophysiology experiments presented so far usually involve sophisticated photostimulation approaches using wavefront-engineering techniques, combined with standard galvanometric scanning microscopes and electrically tunable lenses for recording responses from neurons in different axial planes. Combining high-speed excitation and recording throughout large, continuous volumes would have a great impact on the field.

The optical manipulation of neural circuits has the potential to be an extremely potent approach for understanding brain function, but it requires carefully chosen and calibrated tools and methodology in order to address the specific question under investigation. Now that all-optical manipulation of neurons in a simple on-off manner is more feasible than ever, finer control of the spatiotemporal characteristics of the excitation patterns is needed to reveal more subtle features of the neuronal response. This would help to dissect the role of the activity of a circuit, alone or in conjunction with other circuits, and to observe how such differences alter network performance or lead to a different behavior.