In the study of cardiac anatomy and function, imaging has become a key component of standard clinical diagnostics, as well as of clinical and basic research. Here, we select and describe a small number of the many exciting recent conceptual and technical advances in several imaging technologies. Although these cutting-edge advances promise to greatly increase the quality of output data, each technique presents its own set of limitations, which may delay its implementation in routine clinical practice. We discuss the promises and limitations of each technique in detail, descending from macro- to micro- to nano-scales of observation, with special focus placed on possible pathways to successful integration into the clinical setting.

Cardiovascular magnetic resonance

Magnetic resonance imaging (MRI) is based upon the principle of nuclear magnetic resonance (NMR) - the physical phenomenon of absorption and emission of radiofrequency (RF) energy by certain atomic nuclei when placed in an external magnetic field (B0). The vast majority of today’s MRI concepts use the hydrogen nucleus to generate an RF signal that is detected by antennas (“coils”) positioned close to the examined region of the body. RF pulses are used to “excite” the NMR signal (i. e. cause rotation of the magnetization vector by a certain flip angle), and spatially varying magnetic field gradients are used to localize the signal in space (i. e. spatial encoding). The interplay of signal excitation, spatial encoding and signal reception is typically referred to as a pulse sequence. The variation of pulse sequences and their parameters allows for generation of different image contrasts between tissues, depending on the local relaxation properties of the hydrogen atoms: after RF excitation, the net magnetization vector rotates (“precesses”) around the direction of the main field (B0), and eventually returns (“relaxes”) to its equilibrium state. The return of the net magnetization vector to equilibrium has two components: (1) recovery of longitudinal (i. e. parallel to B0) magnetization components to the equilibrium state parallel to B0, referred to as longitudinal or T1 relaxation, and (2) decay (“dephasing”) of transverse (i. e. perpendicular to B0) magnetization components, referred to as transverse or T2 relaxation. Both T1 and T2 are tissue-specific time constants.
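For reference, these two processes are described by the well-known relaxation solutions of the Bloch equations, where $M_0$ denotes the equilibrium magnetization:

$$M_z(t) = M_0 + \left[ M_z(0) - M_0 \right] e^{-t/T_1}, \qquad M_{xy}(t) = M_{xy}(0)\, e^{-t/T_2}$$

Because T1 and T2 differ between tissues (and between tissue and blood), the timing parameters of a pulse sequence can be chosen to weight the measured signal, and hence the image contrast, towards either relaxation process.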

Since its early development in the 1970s and 1980s, MRI has been demonstrated to be a highly versatile imaging modality in clinical and basic research. Over the last few years, the field of cardiovascular magnetic resonance (CMR) imaging has emerged as an indispensable tool in modern routine clinical practice, allowing for noninvasive and simultaneous assessment of whole heart morphology and function. CMR can accurately and reproducibly provide quantitative evaluation of both global and regional ventricular function, blood flow and tissue motion, myocardial perfusion and extent of injury [3, 71], mainly due to the unique soft tissue contrast of CMR imaging which enables excellent distinction of tissue and blood. Another major advantage of CMR—for example compared to echocardiography—is the unrestricted acquisition window allowing for visualisation of the heart in any oblique plane as well as full volumetric coverage. Furthermore, CMR imaging does not involve the use of ionizing radiation, thereby eliminating safety concerns regarding radiation exposure, such as those encountered in X‑ray computed tomography (CT; see below).

CMR measurements of ventricular volumes and masses (here, mass is obtained by multiplying volume by 1.05 g/cm³ - the density of myocardial tissue [62]) are typically based on the acquisition of a stack of multislice CINE images (i. e. a series of rapidly recorded images taken at different phases of the cardiac cycle to display movements of the heart in a dynamic “cinematographic” fashion) in short-axis orientation (perpendicular to the main axis of the heart), covering the entire cardiac cycle. These images are acquired with so-called steady-state sequences during multiple breath-hold manoeuvres. The resulting images exhibit a high signal-to-noise ratio, and achieve a temporal resolution of <50 ms and spatial resolution of <1–2 mm. Time-resolved global morphological parameters (volumes, mass), as well as functional parameters (ejection fraction), can be obtained postacquisition by segmentation of blood and myocardial tissue at different time points. Today, all standard clinical scanning platforms are capable of performing such measurements, making CMR the reference standard in clinical cardiology for the assessment of left and right ventricular volume and mass [3, 72].
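As a minimal sketch of the mass calculation described above (the segmentation array and voxel dimensions below are hypothetical placeholders; in practice, the segmentation is derived from the CINE stack):

```python
import numpy as np

# Hypothetical binary segmentation of the left-ventricular myocardium from
# a short-axis CINE stack: shape (slices, rows, columns); 1 = myocardium.
segmentation = np.zeros((12, 256, 256), dtype=np.uint8)
segmentation[4:8, 100:140, 100:140] = 1  # placeholder region for the demo

# Assumed acquisition geometry: 1.4 x 1.4 mm in-plane, 8 mm slice thickness.
voxel_volume_mm3 = 1.4 * 1.4 * 8.0

MYOCARDIAL_DENSITY_G_PER_CM3 = 1.05  # density of myocardial tissue [62]

volume_cm3 = segmentation.sum() * voxel_volume_mm3 / 1000.0
mass_g = volume_cm3 * MYOCARDIAL_DENSITY_G_PER_CM3
print(f"Myocardial volume: {volume_cm3:.1f} cm^3, mass: {mass_g:.1f} g")
```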

In order to obtain detailed information about regional cardiac function, such as tissue strain, so-called tagging methods have been developed [3]. Similar to conventional CINE images, these methods are also based on the acquisition of time-resolved images in short-axis orientation. Tagging sequences use a combination of radiofrequency pulses and magnetic field gradients to spatially modulate the tissue’s magnetization (i. e. the measured signal), which induces a grid-like pattern superimposed on the acquired images (the grid raster size is about 4–10 mm). The deformation of the grid over the cardiac cycle can then be used to derive and quantify myocardial strain [35, 84].
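A schematic sketch of the underlying strain calculation (the tracked tag positions are invented for illustration; real tagging analysis tracks the full 2D grid over all cardiac phases):

```python
import numpy as np

# Invented positions (mm) of two neighbouring tag lines in one myocardial
# segment, tracked from end-diastole (reference) to end-systole.
tags_ed = np.array([10.0, 16.0])  # reference frame
tags_es = np.array([10.5, 14.9])  # deformed frame

l0 = np.diff(tags_ed)[0]  # tag spacing at end-diastole
l = np.diff(tags_es)[0]   # tag spacing at end-systole

# Lagrangian strain; negative values indicate shortening (contraction).
strain = (l - l0) / l0
print(f"Segmental strain: {strain:.1%}")
```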

Direct measurements of velocities of both blood and myocardial tissue can be realized by phase-contrast MR imaging [67]. Here, the dynamic information is obtained in a pixel-wise manner that enables functional assessment at higher spatial resolution (about 1–2 mm) than tagging techniques, which are limited to the grid raster size. Phase-contrast (PC) techniques rely on the linear relation between the phase of the MR signal and the velocity of moving structures. PC imaging allows for time-resolved measurements of the underlying velocity pattern and can be extended to “4D flow” imaging (3 spatial velocity components plus time), an approach that is already being used to assess blood flow [16, 86]. When used to measure tissue velocity, these methods are typically referred to as tissue phase mapping (TPM). TPM enables the direct detection of regional changes in myocardial velocities and is currently used in basic research and clinical studies of left ventricular function [19, 39, 69]. The development of TPM techniques to gain biventricular quantitative information (Fig. 1a) is a part of on-going research [59, 87].
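The linear phase-velocity relation can be illustrated with a minimal sketch (the phase image and the VENC value of 150 cm/s are assumptions for illustration):

```python
import numpy as np

# Minimal sketch of the phase-to-velocity conversion in PC imaging.
# The phase image below is simulated; VENC is a user-chosen protocol
# parameter (150 cm/s is an assumption for illustration).
VENC = 150.0  # cm/s; velocities beyond +/- VENC alias (wrap around)

rng = np.random.default_rng(7)
phase = rng.uniform(-np.pi, np.pi, size=(192, 192))  # radians

# Linear relation between signal phase and velocity:
velocity = VENC * phase / np.pi  # cm/s, per pixel

print(f"Peak |velocity| in slice: {np.abs(velocity).max():.1f} cm/s")
```

Choosing the VENC is a trade-off: too low a value causes velocity aliasing, while too high a value sacrifices velocity-to-noise ratio.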

Fig. 1

Different imaging modalities, suitable for varying spatial scales. a Cardiac magnetic resonance; biventricular tissue phase mapping data in a healthy volunteer at (left) systolic and (right) diastolic cardiac phases. Images are acquired in short-axis view at a basal location through the heart. Pixel-wise colour-encoded overlay represents long-axis velocities (vz). White vectors indicate direction of in-plane velocity components (arrow lengths are proportional to the in-plane velocity values). (Image courtesy: Dr. Marius Menza, Department of Radiology – Medical Physics, Medical Centre – University of Freiburg). b Computed tomography (CT); postprocessing of coronary CT angiography; volume rendering for anatomy visualization (left); curved planar reconstruction of the left anterior descending artery (right). c Optical mapping; colour-coded activation timing in a mouse heart in sinus rhythm, demonstrating atrial followed by ventricular activation. d Electron microscopy/tomography; human ventricular myocyte. Scale bar = 1 µm.

The degree of myocardial perfusion can be assessed by administering contrast agents which locally alter the magnetization’s T1 relaxation time, thereby affecting the intensity of the measured signal. As a consequence, the myocardial signal increases rapidly in well-perfused regions, whereas ischemic regions exhibit a measurably smaller or delayed increase [3]. Dynamic measurements of the signal evolution during and after injection of a contrast agent bolus (“first pass measurements”) allow for the assessment of absolute myocardial perfusion [23], while so-called late enhancement methods reveal sites of myocardial injury and can, for example, be used to quantify the extent of acute and chronic myocardial infarction [42]. In addition to T1-sensitive imaging, T2-weighted imaging plays an important role in the detection of myocardial oedema, for example in the course of acute myocardial infarction or myocarditis [78]. Current efforts are aimed at enabling direct measurements of the tissue-specific T1 and T2 times (“T1/T2 mapping”) to provide quantitative assessment of myocardial abnormalities. This is a substantial extension to conventional methods, which are based on qualitative changes in signal intensity only. The usefulness of T1 and T2 mapping sequences for the evaluation of myocardial function is under investigation in both basic research and clinical settings [43, 77, 78].
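As an illustration of the mapping principle, a pixel-wise T1 fit might look as follows (a toy sketch with simulated data; the model form and Look-Locker-type correction are standard, but acquisition details vary between mapping sequences):

```python
import numpy as np
from scipy.optimize import curve_fit

def ir_model(ti, a, b, t1_star):
    """Three-parameter inversion-recovery model: S(TI) = A - B * exp(-TI/T1*)."""
    return a - b * np.exp(-ti / t1_star)

# Hypothetical inversion times (ms) and simulated signal for one pixel
# (after polarity restoration); real mapping repeats this fit per pixel.
ti = np.array([100.0, 200.0, 400.0, 800.0, 1600.0, 3200.0])
rng = np.random.default_rng(42)
signal = ir_model(ti, 1.0, 2.0, 950.0) + rng.normal(0.0, 0.01, ti.size)

(a, b, t1_star), _ = curve_fit(ir_model, ti, signal, p0=(1.0, 2.0, 800.0))

# For Look-Locker-type readouts the apparent T1* is corrected as
# T1 = T1* * (B/A - 1); for ideal inversion recovery B/A = 2, so T1 = T1*.
t1 = t1_star * (b / a - 1.0)
print(f"Fitted T1: {t1:.0f} ms")  # ~950 ms, plausible for myocardium at 1.5 T
```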

As MRI is intrinsically a relatively slow imaging technique, successful CMR requires strategies to compensate for cardiac and respiratory motion, which otherwise may give rise to severe imaging artefacts. Therefore, ECG triggering and respiratory gating are used to synchronize data acquisition with the cardiac and respiratory cycles, respectively. Although these techniques are readily available on modern clinical MR scanners, their effective implementation still requires specialised training. Current efforts in technical and methodological developments aim at accelerating the image acquisition process while maintaining high spatial resolution and providing full three-dimensional (3D) coverage of the heart, e. g. to assess small cardiac structures such as the coronary arteries [100]. Parallel imaging concepts, as well as more recent approaches like compressed sensing (i. e. reconstructing signals and images from fewer measurements), are utilised to achieve substantially shorter imaging times [92]. A prominent recent example, the so-called XD GRASP method, combines data undersampling (i. e. shortening of the acquisition process via golden-angle radial sparse parallel sampling), compressed sensing and data sorting (i. e. sorting data into extra motion-state dimensions depending on respiratory and cardiac motion states) to provide artefact-free full 3D CMR data of dynamic processes [17]. Together with advances in interventional CMR [79] and interventional cardiology, e. g. development of new nonmetallic, bioresorbable vascular scaffolds [75], and the advent of dedicated postprocessing techniques for CMR image analysis [30, 82], it is expected that CMR will continue to increasingly function as a major method of investigation in preclinical, clinical and basic cardiology.
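To convey the idea behind compressed sensing, the following toy sketch recovers a sparse 1D signal from randomly undersampled Fourier samples by iterative soft-thresholding (ISTA); this is a deliberately simplified stand-in for the wavelet- and motion-resolved sparsity used in methods such as XD GRASP:

```python
import numpy as np

rng = np.random.default_rng(0)

# Toy 1D compressed-sensing demo: recover a sparse signal from randomly
# undersampled Fourier measurements by iterative soft-thresholding (ISTA).
n, n_nonzero, n_meas = 256, 8, 64
x_true = np.zeros(n)
x_true[rng.choice(n, n_nonzero, replace=False)] = rng.standard_normal(n_nonzero)

mask = rng.choice(n, n_meas, replace=False)     # sampled k-space indices
y = np.fft.fft(x_true, norm="ortho")[mask]      # undersampled measurements

def soft_threshold(v, lam):
    return np.sign(v) * np.maximum(np.abs(v) - lam, 0.0)

x = np.zeros(n)
for _ in range(300):
    # Gradient step on the data-consistency term ||F_u x - y||^2 ...
    residual = np.fft.fft(x, norm="ortho")[mask] - y
    grad = np.zeros(n, dtype=complex)
    grad[mask] = residual
    x = x - np.fft.ifft(grad, norm="ortho").real
    # ... followed by soft-thresholding, which promotes sparsity.
    x = soft_threshold(x, 0.01)

err = np.linalg.norm(x - x_true) / np.linalg.norm(x_true)
print(f"Relative recovery error: {err:.3f}")
```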

An open issue in CMR (and MR imaging in general) is the examination of patients with implanted cardiac devices such as pacemakers and defibrillators. These devices generally prohibit MR examinations because of safety issues [51, 68]. Although the introduction of MR-compatible pacemakers over the past few years has opened the way to MR exams in such patients, CMR imaging may still be hampered due to structural artefacts in close proximity to these devices. The adoption of MR-compatible devices as a clinical standard will furthermore depend on other aspects such as cost, definition of clear universal clinical guidelines for implantation, and dedicated staff training and education [18, 83].

Cardiac computed tomography

Although CMR provides a wide range of structural and functional information, cardiac computed tomography can offer a more detailed morphological assessment (with spatial and temporal resolutions of 0.35 mm and 24–65 ms, respectively [52]), with superior quality of tissue differentiation and a low artefact rate. While MR uses a strong magnetic field and radio waves, CT involves acquisition of multiple X‑ray measurements at different angles to produce cross-sectional (tomographic) images. CT is already a well-established diagnostic tool for noninvasive evaluation of the coronary tree, thanks to the widespread clinical availability of multidetector CT and the comparability of the final data quality to invasive coronary angiography (Fig. 1b; [95]). The utility of CT for myocardial imaging and for the planning of invasive procedures is now also being increasingly recognised, with the cardiologic and radiologic societies establishing criteria for routine clinical use of cardiac CT [1, 89].

An important recent development in cardiac CT is the possibility of calculating the fractional flow reserve (FFR) for investigation of coronary arterial stenosis. During catheter examinations, FFR is used to grade the haemodynamic severity of a stenosis from blood pressure gradients across the lesion, i. e. from the remaining flow reserve of the vessel. FFR derivation from CT data is based on a physiological model using coronary anatomy and established form-function principles, with a fluid model used to calculate flow and pressure under simulated hyperaemic conditions. While this method is already clinically available, the data cannot be readily obtained and analysed in real time due to high computing power requirements. New, simplified analysis algorithms are not yet commercially available, but they are expected to improve the clinical importance and applicability of FFR measurements in the near future [7, 25].
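At its core, FFR is a simple pressure ratio; a minimal sketch (the pressure values are invented, and the commonly used significance threshold of 0.80 is quoted from the interventional literature rather than from this article):

```python
# Minimal sketch of the FFR definition. The pressures below are invented;
# CT-derived FFR obtains them from computational fluid dynamics rather
# than from an invasive pressure wire.
p_aortic_mean = 93.0  # mean aortic pressure under hyperaemia (mmHg)
p_distal_mean = 68.0  # mean pressure distal to the stenosis (mmHg)

ffr = p_distal_mean / p_aortic_mean
print(f"FFR = {ffr:.2f}")  # values <= 0.80 are commonly treated as significant
```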

Cardiac perfusion imaging moves the focus of CT beyond coronary imaging and provides additional valuable information on myocardial structure. The actual scan is performed both at rest and under cardiac stress (induced by intravenous application of a hyperaemic agent, such as adenosine). The differences in myocardial attenuation (of X‑ray beams within the tissue) can be assessed qualitatively—using single shot techniques—or quantitatively, using dynamic CT acquisition during the drug wash-in and wash-out periods. The postacquisition evaluations involve the use of mathematical models to calculate myocardial blood flow [33, 93]. Image quality may be enhanced by further development of dual energy scanners, where two X‑ray tubes are used to produce different voltages offset at 90° within the same gantry, and photon-counting CT, where detectors are used to count individual photons and allocate them to predetermined energy levels (instead of integrating the absorbed energy levels as is done in most current conventional detectors) [99].
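One such model, sketched below, is the maximum-slope approach, which relates the peak upslope of the myocardial time-attenuation curve to the peak of the arterial input function (all curves here are simulated, and the scaling to clinical units is approximate):

```python
import numpy as np

# Simulated time-attenuation curves (Hounsfield units above baseline),
# sampled once per second during contrast wash-in. All values invented.
t = np.arange(0.0, 30.0)                             # s
aif = 300.0 * np.exp(-((t - 12.0) ** 2) / 18.0)      # arterial input function
tissue = 25.0 / (1.0 + np.exp(-(t - 14.0)))          # myocardial enhancement

# Maximum-slope model: perfusion ~ peak tissue upslope / peak arterial input.
max_upslope = np.gradient(tissue, t).max()           # HU/s
mbf_per_s = max_upslope / aif.max()                  # 1/s
mbf = mbf_per_s * 100.0 * 60.0                       # ml/100 ml/min (approx.)
print(f"MBF: {mbf:.0f} ml/100 ml/min")
```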

The rising utility of cardiac CT is highlighted by its successful paired application in the design and postevaluation of minimally invasive cardiac procedures, such as transcatheter aortic valve implantations, left atrial appendage exclusion, pulmonary vein isolation ablations and mitral and tricuspid valve interventions [26, 64, 85].

Further recent developments in CT include the testing and implementation of dual-source CT, wide array detectors and low voltage (kV) X‑ray tube designs. These improvements have led to significantly increased temporal and spatial resolution, along with lower radiation doses, increasing the utility of CT in clinical practice [28, 54]. Dual source systems comprise duplicated X‑ray tubes and detectors, thereby reducing the gantry rotation required for image reconstruction and improving temporal resolution. This enables coronary CT angiography across a single heartbeat (high-pitch mode), utilising fast CT-table movement with a prospectively ECG-triggered, helical acquisition mode [24, 50]. Meanwhile, CT scanners with wide array detectors (256–320 detector rows) have been able to increase the craniocaudal coverage up to 16 cm without table movement—allowing for visualisation of the whole heart within a single gantry rotation. The reduction in table movement and required heart beats, ideally down to one cardiac cycle, minimises motion-based artefacts (in particular banding/step artefacts), ensuring homogenous contrast enhancement [10, 21].
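The gain in temporal resolution follows from simple geometry: half-scan reconstruction requires roughly half a gantry rotation of projection data, which a dual-source system splits between its two tube-detector pairs. Neglecting fan-angle corrections, for a rotation time $T_{\mathrm{rot}}$:

$$\Delta t_{\text{single}} \approx \frac{T_{\mathrm{rot}}}{2}, \qquad \Delta t_{\text{dual}} \approx \frac{T_{\mathrm{rot}}}{4}$$

so that, for example, a 250 ms rotation yields roughly 125 ms with a single source but about 63 ms with two.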

Dual energy CT can be used for further artefact reduction, vascular plaque analysis, and improved contrast enhancement in myocardial perfusion imaging. Dual energy CT is possible through a number of different techniques: (1) using two X‑ray sources and detector pairs at different tube settings, (2) rapidly switching a single source between low and high tube voltages or (3) using a double-layer detector capable of differentiating between low- and high-energy photons. Photon counting detectors that allow the differentiation of multiple absorption levels are also currently in development [14, 96] and promise more efficient use of dual or multienergy analysis for myocardial or cardiac perfusion imaging.

Concurrently, new X‑ray tube designs allow coronary CT angiography to be performed at higher tube current and lower tube voltage settings (for now only in nonobese patients). This enables robust image quality at lower radiation doses and contrast medium volume. Adequate vascular opacification is maintained due to better absorption of the X‑ray beam at energies closer to the absorption spectrum of the contrast agent [4, 60].

Finally, iterative image reconstruction algorithms help to improve image quality by increasing the signal-to-noise ratio. The main iterative reconstruction models are based on algebraic, statistical or, more recently, model-based iterative reconstruction techniques [65].
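A toy sketch of the algebraic family (here the classic Kaczmarz/ART update; the random system matrix stands in for the real forward projector, and no regularisation or noise modelling is included):

```python
import numpy as np

rng = np.random.default_rng(1)

# Toy sketch of the algebraic (Kaczmarz/ART) family: the image estimate x
# is updated row by row until its projections A @ x match the measured
# data b. The random matrix A stands in for the real CT forward projector,
# in which each row encodes one X-ray path through the image.
n_pixels, n_rays = 64, 200
x_true = rng.random(n_pixels)
A = rng.random((n_rays, n_pixels))
b = A @ x_true  # noiseless "measured" projections

x = np.zeros(n_pixels)
for _ in range(20):           # full sweeps over all measured rays
    for i in range(n_rays):
        a_i = A[i]
        # Project x onto the hyperplane defined by the i-th measurement.
        x += (b[i] - a_i @ x) / (a_i @ a_i) * a_i

print(f"Reconstruction error: {np.linalg.norm(x - x_true):.2e}")
```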

Recent rapid advances in cardiac CT technology promise to make it a mainstay in clinical cardiology. However, further efforts are needed to improve image quality and reduce radiation doses. The clinical applications of cardiac CT for the evaluation of suspected or known cardiac and coronary diseases are expected to evolve soon, based on emerging novel technologies.

Optical mapping

CMR and cardiac CT enable noninvasive high-resolution assessment of cardiac anatomy, as well as of deformation-encoded functional parameters, such as contractility and blood flow velocity. Functional electrophysiology (EP) measurements, however, remain highly invasive, even if recent improvements in catheter design are making mapping and intervention (e. g. atrial ablation) easier, faster and safer. The use of optical techniques to map cardiac EP has revolutionised basic cardiovascular research since the first report of its use to measure changes in cardiac membrane potential (Fig. 1c; [80]). The approach is based on the use of fluorescent dyes that change either their absorption or emission of light in response to changes in cellular properties such as membrane potential or calcium concentration. The tissue containing the dye is illuminated at a set wavelength and the fluorescent emission recorded, typically by a camera with appropriate wavelength filters. Changes in the emitted light intensity are then used to infer changes in cellular properties, making it possible, for example in the case of membrane voltage recordings, to track action potential propagation and duration in single cells, cell cultures, tissue slices and both ex vivo and in situ whole hearts [48].
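A minimal processing sketch for such recordings (the movie below is random noise, so the resulting map is purely illustrative; sign conventions also differ between dyes, with many voltage dyes showing a fractional decrease upon depolarization):

```python
import numpy as np

# Hypothetical optical mapping movie: (frames, rows, cols) fluorescence
# counts at 1 kHz. Here the data are simulated noise, so the resulting
# activation map is purely illustrative.
fs = 1000.0  # frames per second
frames = np.random.poisson(500.0, size=(512, 64, 64)).astype(float)

# Pixel-wise dF/F normalisation: dividing by the per-pixel baseline
# removes uneven dye loading and illumination to first order.
baseline = frames[:50].mean(axis=0)
df_f = (frames - baseline) / baseline

# Activation time per pixel: time of the steepest change of the optical
# signal (for dyes whose fluorescence falls upon depolarization, the
# minimum derivative would be used instead).
activation_idx = np.abs(np.diff(df_f, axis=0)).argmax(axis=0)
activation_ms = activation_idx / fs * 1000.0  # map analogous to Fig. 1c
```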

Increasingly, optical mapping is being applied to ex vivo assessment of human atria [27], ventricles [63] and tissue slices [40] within the frame of translational research. While currently still restricted to nonclinical use, recent advances in optical mapping techniques are enabling wider accessibility and are paving the way towards their potential use in clinical diagnosis.

Originally, photodiodes were used for the observation of optical signals; however, the desire for higher speed, sensitivity and resolution led to the introduction of charge-coupled devices (CCD) and, subsequently, electron-multiplying CCDs (EMCCD) [13, 80]. These are essentially grids of light-sensitive capacitors which generate and accumulate charge in response to light. However, the spatial resolution of EMCCDs remains comparatively low (up to 1 megapixel at 26 frames per second [fps]) and the costs high. Scientific complementary metal-oxide-semiconductor (sCMOS) cameras provide much higher spatial and temporal resolution (up to 5.5 megapixel and 100 fps; when using smaller window sizes this can be increased up to 27,000 fps) at a comparatively low cost, albeit with a loss in sensitivity [13]. More recently, advances in standard, low-cost CMOS cameras (resolution of 0.36 megapixels at 100.2 fps) have enabled their use in optical mapping [47]. Suitable cameras for optical mapping differ in cost, wavelength sensitivity, temporal and spatial resolution, and signal-to-noise ratio, so their choice is application-specific.

Limitations of optical mapping approaches include uneven dye loading, uneven illumination, photobleaching and motion artefacts induced by contraction. The effect of a number of these limitations can be reduced through the use of ratiometric dyes. These dyes are either excited with a single wavelength and their emission measured at two different wavelengths (emission ratiometry) or sequentially excited by two different wavelengths and a single emission wavelength measured (excitation ratiometry). Wavelengths are chosen such that a change in the property of interest will result in an increase in fluorescence of one wavelength and a decrease in fluorescence of the other wavelength. Dividing the two signals reduces the impact of artefacts while also enhancing the signal of interest.
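The principle can be captured in a few lines (a synthetic sketch; the "dye" responses and the artefact here are invented sinusoids, chosen so that the common-mode artefact scales both channels equally):

```python
import numpy as np

# Synthetic sketch of ratiometry: both channels share the same
# multiplicative artefact (motion, illumination), while the voltage-
# dependent component enters the two channels with opposite sign
# (a property of the chosen dye/wavelength pair, assumed here).
t = np.linspace(0.0, 1.0, 1000)
voltage = 0.05 * np.sin(2 * np.pi * 5 * t)          # signal of interest
artefact = 1.0 + 0.2 * np.sin(2 * np.pi * 1.3 * t)  # common-mode artefact

f_ch1 = artefact * (1.0 + voltage)  # fluorescence rises with voltage
f_ch2 = artefact * (1.0 - voltage)  # fluorescence falls with voltage

# The artefact cancels in the ratio: (1 + v) / (1 - v) ~ 1 + 2v for small v,
# so the signal of interest is enhanced while common-mode artefacts vanish.
ratio = f_ch1 / f_ch2
```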

Emission ratiometry requires two cameras—thus either requiring more complex optics to split the image between the cameras or complicating analysis by requiring postprocessing to correlate the images from the two cameras. Excitation ratiometry requires only a single camera with signals from each wavelength interleaved—previously this required mechanical shutters and filter wheels to switch wavelengths, introducing mechanical noise, increasing cost and limiting the available wavelength combinations. The advent of low-cost and high-power light-emitting diodes (LED) has made excitation ratiometry much easier. The flexibility of LEDs has enabled near-simultaneous measurement of multiple ratiometric dyes utilising a single camera [46]. Such an approach, in combination with the use of mirrors, has enabled panoramic measurements of both membrane voltage and intracellular calcium concentration in the Langendorff-perfused heart allowing tracking and correlation of electrical activity and calcium dynamics across the entire heart surface [49].

Ratiometry is not sufficient to correct for large motions, and pharmacological excitation–contraction uncouplers typically have to be used so that electrical signals occur without associated contraction. However, even the most specific (and therefore most commonly used) motion uncoupler, blebbistatin [44, 88], has been shown to significantly affect cardiac EP [5]. Furthermore, the very act of preventing contraction is likely to affect EP via mechanoelectric feedback mechanisms [6, 90]. The development of progressively higher spatial resolution cameras has now enabled postacquisition image processing techniques to reconstruct action potentials from contracting hearts, thereby avoiding the need for uncouplers [12, 101].

A comparatively recent development in optical mapping is the use of optogenetic probes, rather than injectable dyes. These are genetically encoded probes that can be targeted to specific cell populations or subcellular compartments [41]. This allows the kind of in situ investigations that had previously been largely limited to the in vitro setting [74]. However, optogenetic probes typically have a lower intensity and signal-to-noise ratio compared to injectable fluorescent dyes, making measurements more difficult and typically requiring high light intensities and the use of EMCCD cameras. Furthermore, finding or designing appropriate targeting sequences to ensure cell- or compartment-specific expression [38], as well as the time and cost required to produce new viruses or genetically modified animal lines, limits the current applicability of optogenetic approaches. Nevertheless, these probes, in conjunction with optogenetic actuators, promise to rapidly advance our understanding of cardiac function in vivo [34, 74].

Optogenetic probes currently remain firmly within the realm of scientific investigation and are unlikely to transition into clinical use in the very near future. However, recent publications have raised the possibility of using optical mapping clinically. It has been found that indocyanine green, a contrast agent that is approved by both the FDA and the BfArM (Federal Institute for Drugs and Medical Devices) and commonly used for measuring liver function and blood flow, is voltage-sensitive. Initial experiments have confirmed the ability to use indocyanine green to optically record action potentials in cell cultures [91] and Langendorff-perfused rabbit hearts [55]. Until now there have been no reports of the use of indocyanine green in animals in vivo, or in human tissue. Any potential clinical implementation will have to minimise the time necessary to perform the measurements and ensure sufficient access to the site of interest—a problem that might be tackled by new generations of catheters fitted with optical fibres, such as the confocal endoscope discussed in [97].

Photoacoustic imaging

Photoacoustics is an exciting new development on the border of echocardiography and optical mapping, potentially capable of overcoming the limitations described above (e. g. gaining sufficient access for optical mapping in the clinical setting). In photoacoustics, a pulsed infrared laser is used to excite injectable dyes present within the tissue. Upon excitation, the dye particles undergo a brief thermoelastic expansion and, as a result, emit soundwaves that can be detected using an ultrasound transducer, while simultaneously collecting anatomical information. So far, photoacoustics has been used predominantly for tissue type determination (e. g. fat) or to measure haemoglobin oxygen saturation [102]. However, recently developed photoacoustic voltage-sensitive dyes may offer the possibility of noninvasive measurements of membrane potential throughout the myocardium rather than just near the heart surface [103]. Voltage measurements are currently limited by the slow dynamics of photoacoustic dyes (transient responses on the order of seconds), meaning they are restricted to measuring slow changes, such as those that may occur during ischaemia. However, work is proceeding on developing faster variants; once these become available, we are likely to see a revolution in cardiac electrophysiology similar to that resulting from the development of echocardiography. Borrowing from terminology used by the leading team developing photoacoustic dyes, we may become able to listen to voltage in the depth of the heart.

Electron microscopy and tomography

While whole organ functional and structural data are crucial for biomedical research, small scale structural information may help to explain macroscopic observations and inform interventions. For micro-scale investigations one can turn to classic histology and/or immunohistochemistry. Recent advances in technology allow high-throughput 3D rendering of tissue and whole organs, for example using external reference serial histology [9] (thereby allowing for optimal slice-to-slice alignment without large shape artefacts) or deep intact tissue imaging using 2‑photon technologies, such as depth imaging of cleared cardiac tissue samples [38]. This involves transformation of the intact tissue into an optically transparent nanoporous acrylamide-based hydrogel that lends itself to exploration using standard techniques such as antibody labelling [38, 56].

However, light-based imaging approaches are inherently limited by the physics of light diffraction, an obstacle that cannot be easily overcome. This restricts the highest point-to-point resolution to a typical value of about 250 nm, although the use of sophisticated super-resolution microscopes can reduce this to 70 nm [32]. While sufficient to visualise gross tissue architecture and overall morphology of organelles in the whole cell, and to assess the presence of specific cellular markers, light microscopy does not lend itself to the assessment of the 3D intracellular nano-architecture, such as the intricate inner mitochondrial membranes, or the exact modes of contact between different ultrastructural entities within and between cells.

Traditionally, intracellular nano-structures are imaged using electron-based rather than light-based approaches—most commonly by transmission electron microscopy (TEM), where the image is formed as a result of interaction between the beam of electrons and the sample (Fig. 1d; [45]). TEM, in use since the 1930s, is capable of achieving a significantly higher resolution than light microscopes, owing to the smaller de Broglie wavelength of electrons, enabling the capture of much finer detail, even as small as a single column of atoms: a staggering three orders of magnitude smaller than objects resolvable using a light microscope. However, electron-based approaches have traditionally been held back by low throughput and the laborious process required to obtain stacked or 3D images. This has now been partially addressed with the development of automated 3D approaches, allowing for acquisition of large datasets (although sample preparation still remains time-consuming). The most widely used approaches are serial block-face scanning electron microscopy (SEM) [70, 73] and electron tomography (ET) [58].

Regardless of the approach taken, one of the most persistent limitations is the inherent trade-off between resolution and sample/dataset volume. Currently, most SEM-based approaches support a maximum X–Y resolution of 5 nm [66], whereas electron tomography can achieve a resolution of less than 1 nm. However, serial SEM, in which sample planes are removed after imaging by either an ultramicrotome or a focussed ion beam, allows for a greater sample volume to be imaged and analysed. By comparison, the maximum sample thickness for ET is restricted by how far electrons can penetrate through the biological sample (typically less than 0.5 μm; for a more comprehensive review of the advantages and limitations of both methods in cardiac research see [76]). If, however, sub-nanoscopic resolution is desired, ET is superior to other techniques as it allows the user to render 3D images without axial bias and with isotropic (sub-)nanometer resolution. Furthermore, ET is nondestructive, enabling reimaging of the sample, if required. Data obtained by ET lends itself to arbitrary virtual reorientation, allowing for direct examination of structures of interest and their connections with their neighbours [2, 20, 57, 74]. Recent improvements in the field of ET include the development of novel methods of sample preparation, the growing use of cryo-electron tomography (winner of the 2017 Nobel Prize in Chemistry), and the development of correlative light and electron microscopy approaches.

Classic methods of initial sample preparation, usually based on aldehyde-based fixation followed by dehydration, are steadily being cast aside in favour of sample vitrification (creation of noncrystalline amorphous ice), allowing for more faithful preservation of molecular structures [94]. Vitrification can be achieved either by plunge-freezing or by high-pressure freezing. Application of high pressure (about 200 MPa) inhibits the expansion of water and prevents formation of damaging crystals during freezing. Cryo-fixation is typically achieved within milliseconds and ensures simultaneous immobilization of all macromolecular complexes. High-pressure frozen samples may then be “freeze substituted” (water-for-solvent) for imaging at room temperature, or imaged in the native, hydrated, and frozen (under cryogenic conditions, <−150 °C) state by cryo-EM [11, 37]. The stack of acquired projections (for dual axis acquisition the sample is rotated by 90° between the two stacks) yields a 3D volume (a tomogram) that represents the block of vitreous ice in which the specimen was embedded.

The major advantage of cryo-ET is that it allows for observation of specimens that have not been stained (whereas standard ET preparations are stained with heavy metals, often obscuring the finer details) or fixed. The lack of externally applied heavy metals in the sample has helped to steadily improve the achievable resolution of cryo-ET, reaching near-atomic levels (below 5 Å) [15]. This has been further aided by the implementation of single-particle analysis and the application of sub-tomogram averaging techniques, provided that the specimen has optimal size, conformational homogeneity and symmetry (e. g. viruses, protein subunits) [22, 61]. However, the lack of staining also leads to one of the most challenging obstacles in cryo-ET, namely the difficulty of identifying the structures of interest within complicated cellular environments. It is notoriously difficult to unambiguously recognise specific cellular features because of the low signal-to-noise ratio of cryo-embedded specimens. One solution is to combine light-based and electron-based imaging—an approach termed correlative light and electron microscopy (CLEM) [31, 81]. In this set of techniques, a sample in which the protein or structure of interest is fluorescently tagged is frozen under pressure and first imaged in a light microscope equipped with a special stage that keeps the sample at sub-crystallization temperatures (<−150 °C). The location of the fluorescent signal is identified and the sample is transferred to the electron microscope, where the same location is imaged at high resolution by cryo-ET, thereby providing ultrastructural information. The two datasets are then combined to enable at least rough identification of the identity of features observed in cryo-ET.

Although laborious, electron tomography has been employed in cardiac science for several years [8, 29, 36]. Cryo-ET has also been used as a diagnostic tool in the clinical setting, helping resolve the fine ultrastructure of airway epithelial cilia in primary ciliary dyskinesia patients [53], and of platelets in the context of malignant cancers [98]. The continuous advance of both imaging and image processing technologies has had, and will continue to have, a great impact on the quality of the reconstructions; new direct electron detectors promise higher resolution, and advanced image processing software helps to compensate for aberrations and artefacts occurring during acquisition. Owing to these advances, it seems plausible that cryo-ET will be used for routine observation of cellular features of interest within their native environment, in combination with other emerging structural techniques, such as CLEM, providing sub-nanometer resolution and exposing previously invisible information that may aid clinical diagnosis and treatment.

Conclusions

Recent advances in imaging techniques are greatly improving our ability to characterise cardiac anatomy and function in clinical, translational and basic sciences. In doing so, we are enhancing our capability to understand and diagnose disease states. In this paper we have reviewed a subset of recent developments. CMR and CT are already indispensable noninvasive tools in modern cardiac clinical routine and research. Although these methods do not capture processes at a cellular level, they are able to detect structural and functional changes at the macroscopic levels that are relevant for EP and mechanical heart function. Direct assessment of EP using optical mapping, or in future photoacoustics, remains limited to translational and basic sciences; however, both are becoming more accessible. Optical mapping can now be performed more routinely, due in part to wider availability of affordable cameras and improved LED-based illumination systems. Recent developments raise the possibility of applying these methods to clinical practice for electrophysiological recordings. Lastly, nanostructural observations greatly aid diagnostics and research. Advances in the field of electron microscopy (in particular, cryo-ET and CLEM) have enabled the visualisation of individual cells in a near-native state. This minireview outlines a snapshot of part of the progress made in cardiac imaging, which continues its fast-paced development as this article goes to press.