
What You Will Learn in This Chapter

Understand what a single image or a more complex multidimensional dataset represents; identify the technique used for the acquisition and read the metadata; consider the limits deriving from the imaging technique; be able to visualize and render the dataset using different software tools; apply basic image analysis workflows to extract data from images.

Present images and data analysis results in an unbiased way.

4.1 Image Acquisition and Analysis: Why They Are Needed

4.1.1 Image Relevance in Science from Biology to Astronomy

The use of the image as a vector of information is not a feature that uniquely belongs to modern science. From the first attempts made by Renaissance artists to accurately describe human body anatomy to the representation of the lunar topography made by Galileo Galilei, the common surprising feature is the effort to make the image as representative as possible of the investigated object.

Modern science can, fortunately, rely on detectors to capture images of reality (thanks mainly to the theoretical formulation and experiments of those “analogical” geniuses such as Galilei, Newton, and EinsteinFootnote 1).

A variety of imaging techniques, some more invasive than others, can be used to study human anatomy or functional aspects of cellular organelles at different scales of resolution. For example, Fig. 4.1 shows BPAE cells properly stained to characterize different cellular compartments: nucleus, cytoplasm, and mitochondria.

Fig. 4.1
figure 1

BPAE cells. (a) Nucleus (DAPI), (b) actin (Alexa Fluor 488), (c) mitochondria (MitoTracker Red), (d) merge of the channels after the assignment of different color scales to the original grayscale images (a–c). Scale bar: 50 μm

Therefore, the image, considered as the intensity of detected radiation, is not simply a qualitative description of an object or a biological sample, but has the potential to convey useful and unexpected information, often about very different scientific aspects, and so it becomes a “measurable entity.”Footnote 2

Examples of scientific applications of imaging can come from extremely different disciplines:

  • in astronomy, the radiation from the cosmos is detected to measure which wavelengths of the light spectrum are missing (absorption bands); it is then possible to determine the atomic composition of stars which are billions of kilometers distant from the Earth [1].

  • in biology, fluorescence resonance energy transfer can be used to obtain the stoichiometry of interacting molecules in living cells [2], where typical dimensions are on the order of nanometers.

4.1.2 Need for Quantitative Methods Is Extended to Light Microscopy

Light microscopy allows wonderful discoveries about the shape and functions of biological samples, but beyond the descriptive power, it is necessary to apply reproducible acquisition settings and quantitative methods to analyze the resulting images. This is not only to eliminate the subjective bias of the observer but also to take into account the possible variability between different imaging sessions:

  • Experimental conditions and image acquisition parameters. The image itself is determined by sample preparation protocols and acquisition conditions that cannot always be set in a reproducible way from session to session. A few examples of conditions that can change during image acquisition in microscopy are illumination level and evenness, detector gain, optics alignment, image format (bit depth, type), etc. Figure 4.2 shows how images of the same mouse tissue can differ if acquired with (a) low, (b) intermediate, and (c) high gain.

  • Data handling, processing, and visualization settings. The image can be visualized with settings that might induce the observer to draw wrong conclusions, or it can be easily altered by inattentive manipulation or, even worse, by mischievous practices. In Fig. 4.2 the same image is visualized with (a) low and (d) high contrast. Increasing the contrast might make two images acquired with hugely different settings look similar to the eye (in this case Fig. 4.2d and the image in Fig. 4.2c). The real intensity distributions can be understood by using the histogram, where the pixel values are plotted: in Fig. 4.2e the histogram of the image in Fig. 4.2a shows an overall low signal, and in Fig. 4.2f the histogram of the image in Fig. 4.2c shows saturation (peak at the maximum value of 255); a minimal sketch after the figure caption shows how such saturation can be checked from the histogram.

Fig. 4.2
figure 2

Mouse kidney section acquired and visualized with different settings. Same field of view acquired with relatively (a) low (500 V), (b) intermediate (600 V), and (c) high gain (700 V). (d) Same image contained in (a) is visualized with high contrast, showing how visualization settings can give a deceptive image similar to (c). (e) The histogram of image (d) shows the overall low signal, compared to the (f) histogram of image (c), which covers the entire dynamic range and exhibits saturation (accumulation of counts at the maximum value 255). More info in Sect. 4.2.2.7 “Digitization”
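As a minimal sketch (assuming an 8-bit grayscale image and the scikit-image/NumPy packages used in the later protocols; the file name is hypothetical), saturation can be quantified directly from the histogram:

```python
import numpy as np
from skimage import io

# Hypothetical 8-bit acquisition, e.g. the high-gain image of Fig. 4.2c.
img = io.imread("mouse_kidney_high_gain.tif")

# Histogram over the full 8-bit dynamic range (0-255), as in Fig. 4.2e, f.
counts, _ = np.histogram(img, bins=256, range=(0, 256))

# Saturation shows up as counts accumulated at the maximum value (255).
saturated_fraction = counts[-1] / img.size
print(f"Saturated pixels: {100 * saturated_fraction:.2f}%")
```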

The examples reported in Fig. 4.2 highlight the need for precise and quantitative methods in light microscopy, in order to achieve a more reproducible approach at each step of the experiment, from sample preparation to image acquisition and analysis, to ensure more reliable data.

4.2 Representation of Reality: Image Acquisition

4.2.1 What Is the Difference Between a Material Object and a Representation of It?

During imaging and analysis of the resulting image-type datasets, it should never be forgotten that images are a “representation” of reality. Indeed, in the field of theoretical physics there has been an evolution of the concept of reality: from Galileo’s famous statement “(the universe) is written in mathematical language” [3] to the most modern interpretations, according to which “the universe we live in is itself a mathematical structure” and, more importantly, is computable [4].

Nevertheless, the computational power to compute all the aspects of reality is not always available, and we perform imaging to obtain a representation of reality itself. The information about an object of interest is obtained by collecting radiation from the region of space where the object sits, making use of detectors that may have different designs; for example, with positron emission tomography (PET) it is possible to detect the radiation emitted by decaying isotopes contained in the drug administered to the patient, and with magnetic resonance imaging (MRI) neurosurgeons can obtain a scan of the brain before an operation [5].

In a way that resembles painters’ grid technique, image acquisition can be summarized as the association of a number with each cell of a grid: the well-known pixels (picture elements).
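As an illustration only (a toy example, not an acquired image), a digital image is nothing more than such a grid of numbers, for instance as a NumPy array:

```python
import numpy as np

# A toy 4x5 "image": each cell of the grid holds one number, the pixel intensity.
image = np.array([
    [ 12,  15,  20, 180, 200],
    [ 10,  18, 175, 210, 190],
    [  9,  16, 160, 205, 185],
    [ 11,  14,  22,  30,  25],
], dtype=np.uint8)

print(image.shape)   # (4, 5): rows and columns of the grid
print(image[1, 3])   # intensity stored at row 1, column 3 -> 210
```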

The same region of space where our sample sits is hit by a number of different radiations (e.g., cosmic gamma rays, solar neutrinos, thermal radiation emitted by any warm body, visible and ultraviolet light from the sun). A curious example is the application of particle physics to archaeology, in the search for secret pyramid rooms through the measurement of the flux of muons (cosmic particles) [6].

So what simplifies reality into a beautiful landscape painting or a “nice” cell image? The blindness of the detector used to other forms of signal: the eyes of a genius painter are not disturbed by neutrinos, and microscope cameras are “shielded” by filters to read only a certain range of the visible light spectrum.

Depending on the imaging technique selected, we capture just one aspect of our sample and do not obtain a complete description of it. This is a warning about the care we should take when we state that a sample has a certain feature (healthy versus diseased, higher versus lower protein concentration, etc.). The attributes that we tend to “easily” assign are relative to the state of the imaging system, acquisition parameters, and visualization settings, and are of course subject to the physical measurement variability.

Detectors used in light microscopy count photons (discrete light units) coming from the sample environment and have a sensitivity usually limited to a certain range of wavelengths, which for the most common applications is ~400–800 nm (Fig. 4.3).

Fig. 4.3
figure 3

Light wavelength and photon energy. (a) Planck equation: photon energy depends on light wavelength. (b) Electromagnetic spectrum region used in light microscopy. Commonly used cameras exclude infrared (IR) radiation above ~700 nm, to acquire pictures similar to what the human eye sees. The figure shows how undetected light carries useful information: the visibility of blood vessels in a human arm imaged in a similar pose (c) with and (d) without the IR filter

4.2.2 Light Microscopy Key Concepts

4.2.2.1 What Is Light Microscopy?

Light microscopy is the study of small objects making use of the visible part (VIS) of the electromagnetic spectrum (Fig. 4.3b). As the technology improves and more fluorescent tools become available, the area of applications of light microscopy is expanding toward the ultraviolet (UV) and infrared (IR) regions [7] of the electromagnetic spectrum.Footnote 3

4.2.2.2 Optical Resolution

The observation of objects under any lens or complex optical system is often seen with wonder for the aspect of magnification (the ratio between the size of the object obtained in the image plane and the real size of the object in the sample plane [8]). However, magnification does not result in informative images if it is not obtained with correction of the optical aberrations (such as chromatic and spherical aberration) and enough resolving power. Therefore, an object can be seen magnified, but the image can still be blurred and contain aberrations.

Every point-like source of light imaged through a microscope results in an image or structure (in 3D) that is “distorted” by the lens: this blob is referred to as the point-spread function (PSF) (Fig. 4.4a–c). The size of the PSF influences the resolving power, which is defined as the smallest distance at which two objects can be separated, i.e., identified as distinct [9].

Fig. 4.4
figure 4

Optical resolution in fluorescence microscopy. The point-spread function (PSF) describes how a point-like object is reconstructed through a microscope. The obtainable resolution depends on the wavelength of the light and the numerical aperture (N.A.) of the lens. Sub-diffraction limit fluorescent beads (0.1 μm size) are shown as (a) 3D reconstruction, (b) 2D slice middle plane (a bead is annotated with a white square). (c) Bead orthogonally resliced. (d) BPAE cells described in Fig. 4.1 acquired with a 20×/0.8 N.A. objective. (e) Detail cropped from the 20×/0.8 N.A. image. (f) Same field of view as (e) acquired with a 63×/1.4 N.A. objective, where better details of mitochondria are visible; scale bar: 5 μm. (g) Abbe law for optical resolution

The equation that describes the resolution, the famous “Abbe law” (Fig. 4.4g) [10], states that the resolution improves by shortening the wavelength of the light used and by increasing the numerical aperture (N.A.) of the lens. Figure 4.4d, e show cell organelle structures (red) that can be better resolved by using an objective with a higher N.A. (Fig. 4.4f). In short, the N.A. of a lens is more relevant than its magnification.
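For reference, the Abbe limit for lateral resolution (the relation referred to in Fig. 4.4g) is commonly written as

d = \frac{\lambda}{2\,\mathrm{NA}}, \qquad \mathrm{NA} = n \sin\alpha,

where λ is the wavelength of the light, n the refractive index of the medium, and α the half-angle of the light cone collected by the objective. For green light (λ ≈ 500 nm), a 0.8 N.A. objective resolves at best about 310 nm, while a 1.4 N.A. objective reaches about 180 nm, consistent with the finer mitochondrial detail visible in Fig. 4.4f.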

4.2.2.3 Microscope Elements

Any light microscope, like a photographic camera, relies on a source of light to illuminate the sample; the source can be a simple brightfield lamp to illuminate a macroscopic object under a stereomicroscope, or a fluorescence illuminator to specifically excite the green fluorescent protein expressed by a cell line. The illumination light is conveyed by optics, which include optical fibers that allow the light to travel flexibly and over long distances on the optical table with reduced loss, lenses to spread or focus the light beam, dichroic mirrors to reflect or transmit light according to the wavelength, prisms to separate different wavelengths, and crystals whose optical properties vary with the applied voltage. One of the most common microscope configurations is epifluorescence, where the objective has the dual role of focusing the illumination light that arrives at its back aperture and collecting the fluorescence emitted by the sample.Footnote 4 Fluorescence is then sent back toward the detectors, possibly spatially filtered with a pinhole (confocal microscopes; see Sect. 4.2.3) to obtain optical sectioning, and then spectrally filtered in order to read signals in different channels.

The light falling on a target can be absorbed, reflected, or scattered by any substance along its path [11, 12], while a fraction of it travels through the sample and can be detected on the other side. These processes are the basis of the fluorescence and transmitted light detection described in Sects. 4.2.2.4 and 4.2.2.5.

4.2.2.4 Fluorescence

The first and principal use of fluorescence is the measurement of intensity and location, after collecting the signal in a “reflected” geometry: the excitation light passes through a dichroic mirror (Fig. 4.5a.1) and, after being focused by the objective, hits the sample (Fig. 4.5a.2). Depending on the light wavelength and the sample composition, the light can pass through (Fig. 4.5a.3), be scattered (Fig. 4.5a.4), or excite fluorophores that emit light at a longer wavelength in multiple directions (Fig. 4.5a.5). A reduced amount of fluorescence traveling backward can be collected with the same objective (Fig. 4.5a.6), separated from the excitation light using the dichroic mirror, and sent to the detector (Fig. 4.5a.7).

Fig. 4.5
figure 5

Light path for transmitted light and fluorescence microscopy. (a) Excitation light (dotted blue) passes through the (1) dichroic mirror (dichromatic beam splitter) and (2) is focused on the sample by the objective. Depending on light wavelength, sample type, and thickness, some light can pass through and be observed as (3) transmitted or (4) scattered light (blue). Fluorescent molecules in the samples can absorb some excitation light and emit part of the energy as fluorescence (green) (5) in multiple directions. (6) Fluorescence traveling backward can be collected with the objective and (7) sent to the detector by the dichroic mirror. (b) Light of any wavelength at which the fluorophore absorption spectrum is nonzero (dashed blue) can be converted into fluorescence emission (dashed green), characterized by longer wavelengths (Stokes shift) and intensity dependent on the wavelength of the excitation light. The example is relative to the Alexa Fluor 488: by exciting with monochromatic light at 488 nm (cyan line), the emission intensity is lower (green curve) than the maximum obtainable hitting the absorption peak. If fluorescence is read between 520 and 550 nm (black line), the detected signal (lime area) is only about 38% of the total (area under the dashed green curve)

Absorbed light can be dissipated as thermal vibrations, induce chemical changes, or cause electronic energy transitions. Different molecular mechanisms induce energy losses that bring the electron from a higher level to the lowest energy level of the excited state: the starting point of fluorescence decay. Therefore, only a part of the incoming photon energy is converted into fluorescence [13]. Due to the inverse proportionality between the energy and the wavelength of light (Fig. 4.3a), the fluorescence photons possess a longer wavelength than the excitation ones (“Stokes shift”) (Fig. 4.5b).
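In formula form, the relation sketched in Fig. 4.3a is the Planck–Einstein relation

E = h\nu = \frac{hc}{\lambda},

where h is Planck’s constant, c the speed of light, and ν and λ the frequency and wavelength of the light. Since part of the absorbed energy is lost before emission, the emitted photon carries less energy than the absorbed one, and therefore λ_em > λ_ex: the Stokes shift.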

The absorption and emission spectra determine the amount of signal that can be obtained by fluorophores in the sample, depending on the choice of illumination and detection filter sets (Fig. 4.5b).

In samples where multiple fluorophores are present, the cross talk of signals can constitute a problem.

To overcome this mixing of signals, it is fundamental to use specific excitation wavelengths, limited detection ranges for each channel, and a sequential acquisition (see experiment design in Sect. 4.2.4).

Fluorescence can additionally be characterized by the lifetime of the excited state (property used in a technique called FLIM), the polarization of the emission (used to study molecular rotational time and complex formation), resonant energy transfer from a donor fluorophore to an acceptor (FRET), and other properties exploited to answer different biological questions [2, 14, 15].

4.2.2.5 Transmitted Light

When a sample is studied using absorbed light, a transmitted light path can be adopted by positioning the illumination sourceFootnote 5 on the opposite side of the sample with respect to the position of the objective (in Fig. 4.5a light would travel in the opposite direction, from point 3 to point 7).

Figure 4.6 shows how the same tissue slice appears using different approaches that employ both the transmitted light path and a “reflected” geometry (epifluorescence).

Fig. 4.6
figure 6

Same sample imaged with transmitted light and fluorescence. Tissue slice imaged on a wide-field microscope with (a) brightfield (transmitted light, color camera), (b) phase contrast (transmitted light, monochrome camera), and (c) merge of phase contrast and fluorescence channels (monochrome camera) of (d) DAPI, (e) AF488 WGA, and (f) AF568 phalloidin. Scale bar is 10 μm for both (a) and (c)

The transmitted light path is used for simple brightfield illumination acquired with a color camera (similar to eye vision, if an equivalent magnification were achievable) (Fig. 4.6a), for the observation of immunohistochemistry staining (dyes like hematoxylin and eosin), or with contrast techniques such as phase contrast (Fig. 4.6b) or DIC [16].

The “reflected” geometry is instead used to detect the fluorescence in different channels (the blue, green, and red regions of the visible light spectrum) (Fig. 4.6d–f).

4.2.2.6 Detectors: Cameras and PMT/Hybrid

The detectors used in microscopy can be divided into two main classes: cameras and PMTs.

  • Cameras are used on systems where the signal is expected to be emitted in a short period of time from the entire field of view, mainly for wide-field microscopy (epifluorescence, DIC, phase contrast, stereo), selective illumination systems (light-sheet, TIRF), localization microscopy (e.g., STORM), and spinning disk confocal (where illumination of the sample and light rejection are operated by multiple microlenses/pinholes).

    The sensor, divided into a grid of physical pixels, is manufactured by chemical doping of a silicon chip, to allow the formation of electron–hole pairs every time light of a suitable wavelength illuminates the device surface.

    Once simultaneously excited by the light, the pixels release photoelectrons into the conduction band. The charge is then read, possibly amplified according to a chosen gain, and converted into a digital signal. Relevant parameters for the signal-to-noise ratio are the exposure time and the binning factor [17, 18].

  • PMT (photomultiplier tube) detectors are built with a single detection area and are instead used on laser scanning confocal microscopes. The original design includes a photocathode plate that emits electrons when hit by light; these charged particles are then accelerated under an applied voltage (which varies with the gain) so that they hit secondary plates (dynodes), which in turn emit more electrons. This results in a multiplication of charges, which are collected by an anode at the end of the tube, and the electrical current is converted to a digital value. Improved versions of PMT detectors use a combination of design and materials to increase the quantum yield (ratio between the detected and the total received light). Examples of improved PMTs are GaAs, GaAsP, and hybrid detectors; each type gives better performance in specific regions of the spectrum [18, 19].

4.2.2.7 Digitization

The digitization of an optical image (Fig. 4.7a) is performed by the analog-to-digital converter (ADC) of the detector, which converts the fluorescence or transmitted light signal from an electrical current into a digital pixel intensity (Fig. 4.7b). The intensity values are assigned over an interval of possible measurable signal called the “dynamic range” (Fig. 4.7c). The storage of each pixel requires an adequate number of bits (“bit depth”) to make the digital data format capable of representing all the divisions of the dynamic range (Fig. 4.7d). The image bit depth depends on the type of detector and/or the software settings used during the acquisition. A common choice is a digitization at 8-bit, corresponding to 2^8 = 256 different intensity values (from 0 to 255), 12-bit (2^12 = 4096 total values), or 16-bit (2^16 = 65,536 total values). The choice of a higher bit-depth digitization makes more intensity values available, improving the capability of comparison among very similar samples. The downside of high bit-depth datasets lies in image handling, given the larger storage space and computational resources required, and the limitations of many software algorithms designed around the 8-bit format.
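A minimal sketch of the relation between bit depth and the number of intensity levels (illustrative only; the simulated frame stands in for a real 12-bit acquisition stored in a 16-bit container):

```python
import numpy as np

# Number of distinct intensity levels available at common bit depths.
for bits in (2, 4, 8, 12, 16):
    levels = 2 ** bits
    print(f"{bits:>2}-bit: {levels} levels (0-{levels - 1})")

# A 12-bit camera frame is usually stored in a 16-bit container,
# so its values occupy only 0-4095 of the 0-65535 range of the dtype.
frame_12bit = np.random.randint(0, 4096, size=(512, 512), dtype=np.uint16)
print(frame_12bit.max() <= 4095)  # True
```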

Fig. 4.7
figure 7

Images and numbers. HeLa cell nucleus stained with (a) DAPI. A small region of the image (red square) is inspected to show (b) the pixel values (sample has been acquired on a 12-bit monochrome camera). (c) Image histogram. (d) Pixel intensities range obtained with different “bit depth”: 2-bit, 4-bit, 8-bit, 12-bit, and 16-bit

Images acquired with a color camera are digitized by assigning a triplet of values to each pixel, corresponding to the red, green, and blue (RGB) channels, each usually varying over an 8-bit range, so that all the other colors are obtained by additive RGB composition.
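A minimal sketch of this additive composition (simulated 8-bit channels, illustrative only):

```python
import numpy as np

# Three single-channel 8-bit frames of the same field of view (simulated here).
red   = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
green = np.random.randint(0, 256, (256, 256), dtype=np.uint8)
blue  = np.random.randint(0, 256, (256, 256), dtype=np.uint8)

# An RGB image stores a triplet of values per pixel: shape (rows, columns, 3).
rgb = np.dstack([red, green, blue])
print(rgb.shape, rgb.dtype)  # (256, 256, 3) uint8
```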

4.2.3 Examples of Microscope Systems: Simplified Light Path for Wide-Field and Confocal Laser Scanning Microscopy

Every microscope system needs a source of light to illuminate the target sample, lenses to convey the light appropriately toward and from the sample area, and detectors to read the signals.

The choice and arrangement of the optical components differentiate the systems on the basis of their capability to image different samples. Two different types of commonly adopted microscopes are described here, with a very essential representation of their components (actual microscopes are more complex, and technological refinements often add more lenses, filters, and devices along the optical path, in addition to an expensive aberration-corrected objective). Figure 4.8a shows the cartoon of a cell and some of the compartments that are usually investigated in the life sciences (nucleus, cytoplasm, organelles). Beyond resolution, one of the most addressed aspects of light microscopy is optical sectioning, which consists in the acquisition of information coming only from the focal plane or from a limited section of the whole sample, without the need to physically slice it. Figure 4.8b, c represent the essential microscope body and describe the salient parts of the most widespread microscope configurations: the wide-field and the confocal microscope.

Fig. 4.8
figure 8

Comparison of wide-field and confocal imaging modalities, and optical sectioning. (a) A cell consists of multiple compartments of different sizes distributed across the cell body. The image acquired on any microscope system, for a sample thicker than the depth of focus of the lens, contains light arriving both from the focal plane and out of it. (b) A wide-field microscope illuminates the sample extensively across the Z-axis; the fluorescence received from the sample derives from both the focal plane and the out-of-focus regions. The resulting image does not allow us to attribute the light correctly to the originating Z layer. (c) Differently from a wide-field, a confocal microscope is equipped with a pinhole that operates the rejection of out-of-focus light, yielding the optical sectioning. (d) The pinhole size influences the amount of signal that is received due to the level of light rejection. The plot represents the mean fluorescence intensity (MFI) measured as a function of pinhole size in (e) a mouse kidney tissue section acquired on a confocal microscope, at different pinhole sizes. A region of interest (ROI in yellow) has been measured in all the images to obtain the plot in (d)

A wide-field microscope (Fig. 4.8b) allows us to illuminate and observe a sample along the entire Z-axis, given that light can penetrate through it enough to either excite fluorophores (fluorescence detection) or simply pass through and be detected on the other side (transmitted light detection). As an illuminator, it often includes gas-filled or filament light bulbs, or specific LED sets; the choice of the illumination source should ensure enough excitation power for the sample at specific wavelengths (illuminators have different spectral power profilesFootnote 6). The image can then be detected with a camera.

A confocal microscope (Fig. 4.8c) uses lasers to illuminate the sample in a subregion of the field of view (FOV). This allows the use of monochromatic light to specifically excite the fluorophore of interest. The most widely adopted confocals are laser scanning systems, in which the laser spot is moved by a scanner across the FOV to serially illuminate points and contextually read the signal along lines in the XY directions. More correctly, these points should be thought of as spots, since the size of the laser beam is not infinitesimally small in XY and the excitation also occurs along the Z-axis. The longer acquisition time due to the serial illumination/read-out is compensated by the optical sectioning obtained with the use of a pinhole. The role of the pinhole in detection is to allow the passage of light originating at the focal plane while excluding the out-of-focus light. This light rejection yields the optical sectioning.

In most modern confocals, the pinhole size can be controlled and optimized for the specific objective, in order to get enough optical sectioning while still reading a sufficient signal. In general, light rejection favors optical sectioning but reduces the signal read from the sample (Fig. 4.8d, e).

4.2.4 Experiment Design

When a sample includes multiple fluorophores, the design of a multichannel acquisition should be optimized to answer the salient questions posed by the experiment. Unfortunately, in microscopy it is not possible to have a combination of settings that simultaneously optimizes different aspects such as the signal-to-noise ratio, acquisition speed, spatial resolution, and sample viability.

Therefore, the acquisition parameters are chosen to address one aspect at a time, while often worsening the others.

For example, a possible experiment might be imaging fixed cells to determine the fine structure of a specific organelle type, in an attempt to optimize spatial resolution. In this case the choice could fall on a super-resolution technique, such as photo-activated localization microscopy (PALM) [20]. However, PALM requires particular sample preparation protocols and a long acquisition time compared to other techniques.

Conversely, in a live cell imaging experiment the acquisition speed is a relevant parameter while a suboptimal resolution can be accepted; in that case, the use of a spinning disk confocal is more suitable [21].

When designing an experiment, in order to get sufficiently good data sets that will facilitate the image analysis, it is fundamental to consider a variety of aspects such as optical resolution and digital sampling, signal to noise, acquisition speed, photobleaching, sample viability, and cross talk of signals.

Table 4.1 contains an excerpt of parameters used to acquire the image in Fig. 4.4d, as an example of a multichannel experiment aimed to obtain a good signal to noise and an optimal spectral separation on a confocal system.

Table 4.1 Selected acquisition settings used to capture the image in Fig. 4.4d

4.2.5 Questions and Answers

Questions

  • 4.2.5.1 Can I obtain all the properties of an apple simply by looking at its picture?

  • 4.2.5.2 How is the image of a sample (and its scientific content) influenced by the acquisition technique and image handling?

Answers

  • 4.2.5.1 No, the image of a sample is obtained by detecting some radiation. Light microscopy uses the visible part of the electromagnetic spectrum to create images that are a “representation” of reality: most of the properties of the objects are ignored and cannot be computed.

  • 4.2.5.2 The information contained in an image depends on the adopted imaging technique and acquisition settings. If the technique is not appropriate or the settings are not well tuned, any analysis done on the dataset is subject to failure.

4.3 Image Visualization Methods

Datasets obtained through different imaging modalities need to be inspected before starting any analysis. The purpose is to get an overall idea of the dimensionality (FOV size, channels, slices, frames, positions). Additionally, any spark of intuition about the experimental results can start from visual inspection of the acquired images. It is therefore fundamental to adopt visualization methods that are not prone to error and that fairly render the scientific information carried by the data, alongside a robust quantitative analysis.

If no reference to the acquisition parameters is available, the first step is to identify the file format and look for software able to open the dataset. In some cases, the microscope system saves the images in file formats that are interpretable only by proprietary software. Fortunately, this has become a rarer occurrence, mainly thanks to the efforts of the “Bio-Formats” project, which aims to make all formats of image-type data readable [22]. The “Bio-Formats” plugin is available in the most commonly adopted open-source software, such as ImageJ [23], FIJI [24], CellProfiler [25], and Icy [26], as well as in commercial packages. When the dataset is opened in the correct way, its dimensionality (X, Y, Z, positions, time points, etc.) can be easily assigned and the metadata are available for inspection. Metadata are additional data, attached to the images, which contain useful information regarding the acquisition parameters; they can be checked to establish whether the samples have been acquired in the same way, and they additionally support data analysis, especially if presented in an open and standardized format [22]. An example of metadata extraction is reported in Table 4.1.

A single grayscale image can be visualized with a look-up table (LUT), a map between pixel values and different hues of a color (e.g., green, red, etc., as in Fig. 4.6d–f) or a combination of colors (to highlight features such as intensity saturation or outstanding structures, as in Fig. 4.2c).

Multichannel images are composed of individual grayscale ones. Assigning a LUT to each channel then makes it possible to visualize the merge (as in Fig. 4.6c).
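A minimal sketch of both operations with matplotlib, whose colormaps act as LUTs (the file names are hypothetical and an 8-bit range is assumed; fixing vmin/vmax keeps the B&C range identical across images, as recommended below):

```python
import numpy as np
import matplotlib.pyplot as plt
from skimage import io

# Hypothetical single-channel acquisitions of the same field of view.
dapi  = io.imread("nuclei_dapi.tif")
actin = io.imread("actin_af488.tif")

fig, axes = plt.subplots(1, 3, figsize=(9, 3))

# A LUT maps each pixel value to a hue; the same vmin/vmax is used for both channels.
axes[0].imshow(dapi,  cmap="Blues_r",  vmin=0, vmax=255)
axes[1].imshow(actin, cmap="Greens_r", vmin=0, vmax=255)

# Simple merge: place each normalized channel in one slot of an RGB triplet.
merge = np.zeros(dapi.shape + (3,))
merge[..., 2] = dapi / 255.0   # blue channel
merge[..., 1] = actin / 255.0  # green channel
axes[2].imshow(merge)

for ax in axes:
    ax.axis("off")
plt.show()
```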

Z-stack or timelapse experiments can be shown in a montage (as in Fig. 4.8e), to compare slices/frames.

A relevant issue is the choice of visualization settings, which can be changed to the point where it is no longer possible to fairly compare images. To be safe, the main references remain the pixel intensity, the adoption of the same range for brightness and contrast (B&C),Footnote 7 and the inspection of metadata to evaluate the acquisition conditions.

The following protocols require the use of different software and describe how to run commands either in the graphical user interface (GUI) or in the script editor. Conventionally, the name of a software tool is in italic and bold (e.g., CellProfiler is a software tool), while the menu structure of the commands to be executed is in “italic” (a file can be opened by clicking “File/Open” in succession). Lines of code are instead reported using another font type: to show the value of the variable called “pixelSize”, the command print(pixelSize) should be used.

Almost all the operations described as image visualization (Sect. 4.3), analysis (Sect. 4.4), or data presentation (Sect. 4.5) can be equivalently executed using different approaches, ranging from an exclusively visual one in the GUI to a strictly code-only one in the script editor. Notably, each presented protocol could be executed in a single software tool only, to pursue the easiest approach. However, a comparison of different methods to perform the same operations can highlight the advantages of a GUI or the flexibility of a script. The levels of difficulty are defined as “basic”, “intermediate”, and “advanced”. The presented protocols are available in the following repository: https://github.com/RoccoDAnt/Basic-digital-imaging_protocols.

Protocol 4.3.A: Visualization in ImageJ/FIJI—Level: Basic

FIJI [24] is a distribution of ImageJ [23] software: it includes several useful plugins for bioimage analysis and supports further developments of Java libraries for image analysis.

  1. Download the FIJI software from the website: https://fiji.sc/.

  2. Unzip and open the folder: launch ImageJ/FIJI.Footnote 8

  3. ImageJ allows us to open images/datasets through different methods, some of which are:

     (i) Drag and drop onto the software bar.

     (ii) Open a specific format or a sequence of images with “File/Import/”.

     (iii) Import bioimage datasets through the “Bio-Formats” plugin: “Plugins/Bio-Formats/Bio-Formats importer”.

The acquisition software can be set to attach metadata to the saved images; this information can simplify the import of the dataset, making specifications such as the pixel size, number of channels, and Z-stack slices no longer necessary.

Use of method 3 (i) can result in loss of some metadata, while method 3 (iii) is usually successful in the recognition of the acquisition metadata (thanks to the work of the Bio-formats developers [22] and the microscopy community, which constantly demands the inclusion of more data formats).

  4. The properties of the image are visible in different places:

     (i) The image frame usually contains the title and a text area below specifying the number of pixels, the bit depth, and the size of the file; for example, image 1.czi and 1024x1024; 8-bit; 3 MB.

     (ii) The command “Image/Properties…” shows the dimensions (channels, slices, frames) and the voxel size (pixel width/height and depth).

     (iii) The metadata are visible with “Image/Show Info…”.

  5. The multidimensional dataset (obtained through 3 (iii) with the option “View stack with: Hyperstack”) can be browsed using the sliders in the bottom part of the image window, and the merge of channels can be seen with “Image/Color/Channels Tool…/Composite”.

  6. The brightness and contrast of the single channels can be changed individually, moving the “C” slider to different positions and using “Image/Adjust/Brightness/Contrast…”.

  7. Pixel values of the image can be obtained with the “Pixel Inspection Tool” available in the main bar.

Protocol 4.3.B: Import Images in CellProfiler—Level: Basic (Propaedeutic for 4.4.B)

CellProfiler is a software package for batch image analysis [25]. It has a GUI that allows us to build an image analysis pipeline in a really easy way, by adding modules to be executed in series. Each module performs one operation, such as the identification of nuclei according to size and fluorescence intensity (“IdentifyPrimaryObjects”) or measurements of the detected cells (for instance, “MeasureObjectIntensity” will produce statistics about the average intensity, standard deviation, etc.). The efficacy of the pipeline can be checked module by module in “Test Mode”, with the option of enabling/disabling a particular module and hiding its results.

This first protocol includes two modules to import images and visualize them, in order to show how the pipeline can be built using visual inspection of test images. The aim is to assess which parameters to use in the analysis (e.g., intensity threshold and size range).

  1. Download the CellProfiler software (version 3.1.9) from the website: https://cellprofiler.org/.

  2. Install and launch CellProfiler.

  3. CellProfiler has two alternative methods to set up the import of images:

     (i) The legacy input module “LoadImages”, which can be added with “Edit/Add module/File Processing/LoadImages”, or simply by clicking on the “+” symbol in the bottom left part of the GUI. This opens the “Add modules” window, which lists all the available operations that can be included in the pipeline (equivalently called “project”). The “input” folder where the “LoadImages” module will find the images, as well as the “output” folder where we want to save the results, can both be specified by clicking “View output settings” (bottom left of the GUI).

     (ii) The default method called “Images”, where special filters can be set to choose specific substrings contained in the filename: for example, to import only the DAPI image from an “input” folder that also contains a GFP channel, the string “DAPI” should be specified.

  4. For the simple inspection of the two channels, option 3 (i) can be used, adding the individual images called DAPI and GFP in the “LoadImages” tab. The module “GrayToColor” allows us to merge multiple channels into an RGB image. Choosing “Start Test Mode” allows executing one step at a time and seeing the results. Once “GrayToColor” has been executed, a window that also contains the original image can be used to inspect the pixel intensities by moving the pointer (values are shown in the bottom part of the frame).

  5. With a right click it is possible to call “Show image histogram”.

Protocol 4.3.C: Use Coding to Open/Visualize an Image—Level: Intermediate (Propaedeutic for 4.4.C)

A few lines of code in Python to import and visualize an image dataset.

Python is a really flexible programming language, and there are numerous packages supporting image analysis. To execute a Python script, a development environment called JupyterLab (https://jupyter.org/) can be used. It presents a really clean interface that includes a file browser and a view of the running terminals and kernels, together with support for autocompletion and syntax highlighting in the text editor. Code is organized inside this development environment in the form of notebooks, which are documents composed of different independent cells containing pure text, code, or markdownFootnote 9 text.

JupyterLab can be installed and launched within Anaconda Navigator (see below), in order to make the management of packages and environments much easier. The 3D rendering and orthogonal views of an HREMFootnote 10 dataset, obtained with napari [27], are shown in Fig. 4.9.

Fig. 4.9
figure 9

3D rendering and reslicing in napari. A mouse embryo imaged with high-resolution episcopic microscopy (HREM) is visualized with the napari software as (a) 3D rendering, (b) XY slice view, (c) YZ reslicing. The use of a few lines of code for simple image visualization is worthwhile to simplify operations such as reslicing and 3D rendering. Additionally, napari supports the use of different layers, allows manual annotations, and includes an iPython terminal for integrated image analysis of the dataset. Annotations and results of segmentation can be added as new layers

  1. Download the Anaconda Navigator software from the website: https://www.anaconda.com/products/individual.

  2. Install and launch Anaconda Navigator.

  3. Install and launch JupyterLab.

  4. “File/New/Notebook”, choosing “Python 3” for the new kernel. This creates a new “.ipynb” file that we can edit.

  5. The usefulness of notebooks consists in the possibility to run chunks of code in independent cells (“Ctrl+Enter”). A simple test, such as “17 + 13”, can be run in a cell. The protocol code is in Table 4.2.

  6. It is advisable to use the Anaconda Prompt terminal to create a specific environment for napari (simply run the command contained in cell 1).

  7. Activate the new environment and install napari (copy and run the second cell in the Anaconda Prompt as well).

  8. Choose the newly created environment napari-env in the Anaconda Navigator GUI, and restart JupyterLab. The command in the third cell and the following ones have to be run inside a new “.ipynb” notebook. Cell 3 is the command to choose the visualization library.

  9. The fourth cell contains 5 lines executing the following:

     (i) Import the method “io” from the library “skimage” to read images.

     (ii) Import the embryo dataset (or any other local image Z-stack) and store it in a variable called “myImage”.

     (iii) Import the napari package.

     (iv) Create a viewer to render in 3D.

     (v) Add the dataset “myImage” to the viewer.

Table 4.2 Python code to visualize a dataset with napari. The code is divided into cells that have to be run in the terminal called Anaconda Prompt (cells 1 and 2) and cells that can be run in a notebook (.ipynb) inside the JupyterLab editor. Cell 1) A special environment can be created with the Anaconda Prompt to host the napari software. Cell 2) The environment has to be activated before the installation. Cell 3) napari utilizes “Qt” for the graphical user interface; this and the following cells have to be run in JupyterLab. Cell 4) The method “io” is imported from the “skimage” library to read the “Mouse_embryo_HREM.tif” dataset; finally, a viewer to render in 3D is created and the image stack is added as a layer, ready to be visualized, annotated, or further processed (see Fig. 4.9)
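A hedged reconstruction of the cells described in Table 4.2 follows (the environment name, Python version, and install command are assumptions; the dataset name follows the table):

```python
# Cell 1 (Anaconda Prompt): create a dedicated environment for napari.
#   conda create -n napari-env python=3.8
# Cell 2 (Anaconda Prompt): activate it and install napari.
#   conda activate napari-env
#   pip install "napari[all]"

# Cell 3 (JupyterLab notebook): napari uses Qt for its graphical user interface.
%gui qt

# Cell 4 (JupyterLab notebook): read the dataset and render it in 3D.
from skimage import io                        # "io" is used to read images
myImage = io.imread("Mouse_embryo_HREM.tif")  # local Z-stack (see Table 4.2)

import napari
viewer = napari.Viewer(ndisplay=3)  # viewer set to render in 3D
viewer.add_image(myImage)           # the stack is added as a layer
```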

4.4 Image Analysis

Data acquired on any microscopy platform often contain a level of information that is not comprehensible by simple visual inspection. Human perception is additionally subject to bias due to its variability among subjects (different light sensitivity, color blindness, or eye disorders). Furthermore, it depends on color representation (the human eye has evolved around the peak emission of solar light), visualization settings (B&C, gamma, screen brightness, etc.), and personal beliefs (agreement with previous data or published literature). It is therefore a good practice, and nowadays a need that can hardly be dismissed, to sustain hypotheses regarding data interpretation with a reliable image analysis.

Image analysis makes use of a variety of software tools identifiable as components, which can be combined to build the entire workflow. Frequently, the use of a single software tool to complete the whole process is not possible, and it is advantageous to become familiar with different approaches and tools, spanning from GUI-based ones to scripting [28].

Well-defined algorithms allow datasets to be analyzed in a robust way, with an enormous advantage in automating the image analysis workflow. Furthermore, the development of machine and deep learning makes possible the extraction of scientific information otherwise inaccessible with other methods. Additionally, the classification and processing of huge volumes of data, which would be extremely time-consuming for human operators, can be executed in a much shorter time [29].

Despite these recent developments, the limits of image analysis should not be forgotten: image analysis cannot “show” what is not contained in the data, often because of inappropriate biological sample preparation or image acquisition. Moreover, the scientific hypotheses may simply be wrong.

Data should be interrogated to obtain information to validate hypotheses, instead of being forced to sustain biased positions developed only through visual inspection.

To promote the knowledge of image analysis, the wise use of bioimage analysis tools, and the mutual interchange between biology and computer science, initiatives such as NEUBIAS training schools and community meetings have been promoted in the last few years.Footnote 11 The performances and results obtained by the use of different algorithms can be compared with benchmarking, a complete approach to which is available through the BIAFLOWS project: a platform to deploy and fairly compare image analysis workflows [30].

One of the main goals of image analysis is object segmentation, which consists in the identification of structures of interest such as cell nuclei, organelles, etc. Common workflows are aimed at operations that include simple cell counting in an FOV, intensity measurements (like MFI or value distribution), and size and shape determination (occupied area, elongation, convexity, etc.).

The protocols 4.4.A-B-C show how those tasks can be run with different software tools.

4.4.1 Questions and Answers

Questions

  • 4.4.1.1 The results of image analysis are susceptible to the choices I make. Is this science? For example, let us talk about something that should be simple like setting a threshold.

  • 4.4.1.2 Do I need all the possible measurements, or which ones should I choose?

Answers

  • 4.4.1.1 Image analysis results are definitely influenced by the choice of parameters we use in the workflow; the threshold is a clear example of how choosing too low a value might result in undersegmentation (e.g., more cells identified as a single clump), while setting it too high might break the objects into multiple ones and cut part of them. For the benefit of science it is important to use reliable algorithms, properly selected with benchmarking.

  • 4.4.1.2 Additional measurements can help to better discriminate the different samples (such as positive control and treated cells). However, starting with high dimensionality as a first approach can slow down the development of the workflow; in addition, multiple features might be correlated. The suggestion is to start with simple operations, such as object counting, MFI and diameter measurements, and checks for positivity in the different fluorescence channels.

Protocol 4.4.A: Measure Areas and Count Objects in ImageJ/FIJI—Level: Basic

The protocol uses a fluorescence image of cell nuclei (“HeLa-20Xdry-DAPI_300ms.tif”) and shows how to apply a threshold and get a binary image, to count and measure cell nuclei.

  1. Open the image with FIJI (Protocol 4.3.A).

  2. Check whether the pixel size is already calibrated: “Image/Properties…”; use the image-specific pixel size (1.95 μm in this case).

  3. Duplicate the current image; the copy will be used to apply the threshold: “Image/Duplicate…”.

  4. Select the copy and apply a threshold: “Image/Adjust/Threshold…”. You can choose Min = 250. Click on “Apply”. The result is a binary image: pixels have value 0 for background and 255 for foreground (objects of interest).

  5. Segmentation can be improved by splitting touching nuclei: “Process/Binary/Watershed”.

  6. The measurements can be chosen with “Analyze/Set Measurements…”. Choose “Area”, “Mean gray value”, and “Shape descriptors”. Select the original fluorescence image name in “Redirect to”.

  7. Identification of individual nuclei is run by calling “Analyze/Analyze Particles…” on the binary image; the measurements will instead be redirected to the original image. In the panel that pops up choose the options “Display results”, “Add to Manager”, “Exclude on edges”, “Include holes”.

  8. The measurements are displayed in the “Results” window, the identified regions in the “ROI Manager”. It is possible to get the distribution of every column of the “Results” table: “Results/Distribution…”.

The protocol should lead to the content of Fig. 4.10.

Fig. 4.10
figure 10

Nuclei detection and measurements with FIJI. (a) Original image of cell nuclei. (b) Applying a threshold to an image copy produces a binary image. (c) Objects detected by the “Analyze Particles…” command. (d) Measurements are shown in the “Results” window. (e) Distribution of MFI obtained from the “Results” table. (f) Identified regions are listed in the “ROI Manager.”

Protocol 4.4.B: Count Cell Nuclei and Related Vesicles with CellProfiler—Level: Basic

The advantage of using CellProfiler for cell counting and measurements is the possibility to easily build a pipeline that can also detect subcellular vesicles and analyze several images automatically. The protocol analyzes a folder containing 3-channel images: the cell nucleus, a cytoplasmic staining, and vesicles.

  1. Launch CellProfiler and set the input/output folders (explained in Protocol 4.3.B); set LoadImages.

  2. Add IdentifyPrimaryObjects to find the nuclei in the first channel: start with a typical diameter between 80 and 150, do not discard objects outside this range, and set a manual threshold of 0.005 in the advanced options.

  3. Add IdentifySecondaryObjects to get the cytoplasm by propagation in the second channel (using the primary objects’ name as “input”).

  4. Add IdentifyPrimaryObjects to find the vesicles in the third channel: choose a typical diameter between 3 and 40, and use a manual threshold of 0.0025 in the advanced options. Comment: the choice of good parameters for vesicle segmentation might turn out to be a bit tricky, due to the diversity of structure size and intensity in the same sample; the indicated parameters work to count the smaller and brighter ones. Appropriate retuning allows the detection of bigger ones.

  5. Add RelateObjects to find the “child” vesicles for each “parent” nucleus.

  6. Add MeasureObjectIntensity to get statistics of the vesicles (use the third-channel image).

  7. Add ExportToSpreadsheet to save all the results to a .csv file.

Protocol 4.4.C: Use Coding to Count and Measure Cells—Level: Advanced

The approach described in this protocol may look more difficult compared to the use of a GUI, but it represents the beginning of a walk along “the road to freedom”. Learning new coding skills opens up many more possibilities than having several software GUIs available. The following script includes a minimal number of lines in Python to apply a threshold, find connected components, and print their properties (intensity and size). It can be run in a JupyterLab notebook and uses the packages scikit-image [31], pandas [32], and matplotlib [33]. The outputs of the script should be similar to Fig. 4.11.

figure a
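The listing itself is reproduced as a figure in the printed chapter; a minimal sketch with the same structure (threshold, connected components, measurements), assuming scikit-image, pandas, and matplotlib and reusing the file name from Protocol 4.4.A, could look like this:

```python
import matplotlib.pyplot as plt
import pandas as pd
from skimage import io, filters, measure

# Input image of cell nuclei (same file used in Protocol 4.4.A).
img = io.imread("HeLa-20Xdry-DAPI_300ms.tif")

# 1. Apply a threshold to obtain a binary mask of the nuclei.
threshold = filters.threshold_otsu(img)   # or a manual value, e.g. 250
binary = img > threshold

# 2. Find the connected components: one integer label per nucleus.
labels = measure.label(binary)
print(f"Objects found: {labels.max()}")

# 3. Measure area, mean intensity (MFI), and position of each object.
props = measure.regionprops_table(
    labels, intensity_image=img,
    properties=("label", "area", "mean_intensity", "centroid"),
)
results = pd.DataFrame(props)
print(results)

# Display the original image and the labeled objects (cf. Fig. 4.11a, b).
fig, axes = plt.subplots(1, 2, figsize=(8, 4))
axes[0].imshow(img, cmap="gray")
axes[1].imshow(labels, cmap="nipy_spectral")
for ax in axes:
    ax.axis("off")
plt.show()
```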
Fig. 4.11
figure 11

Counting nuclei and measuring properties in Python. (a) Cell nuclei in a fluorescence image are (b) segmented and labeled, and (c) measured, finding the area, MFI, and position, with a Python script

4.4.2 Questions and Answers

Questions

  • 4.4.2.1 Are there segmentation “rules” other than threshold?

  • 4.4.2.2 What if I do not know the properties of my objects/cells of interest?

Answers

  • 4.4.2.1 Yes; thresholding is the most immediate method, but patterns and statistics can also be employed.

  • 4.4.2.2 Use dimensionality reduction methods such as PCA or machine learning (unsupervised classification).

4.5 Publication of Images and Data

The presentation of images for the purpose of publication requires careful handling of the datasets, which should not be processed by changing pixel intensities or by altering proportions (both spatial and signal related).

The most widespread graphics software allows the inadvertent use of operations such as smoothing, change of gamma, resizing, and compression to 8-bit from higher bit depths. All these manipulations may result in compromised data that contain altered scientific information.

Available guidelines for safe image visualization can help avoid the spread of improper image manipulation, scientific misconduct, or trivial errors. These include the obvious avoidance of copy-paste procedures, spatial transformations, and the use of “lossy” compressed saving formats such as “.jpeg”; the limitation to simple linear transformations (B&C levels, not the nonlinear gamma); and above all the inclusion of references for image interpretation, such as calibration bars and a history log [34, 35].
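A minimal check of why lossy formats should be avoided (synthetic 8-bit data, illustrative only; scikit-image is assumed for reading and writing):

```python
import numpy as np
from skimage import io

# Synthetic 8-bit "acquisition" used only to illustrate the round trip.
rng = np.random.default_rng(0)
img = rng.integers(0, 256, size=(256, 256), dtype=np.uint8)

# Lossless container: pixel values survive saving and reloading unchanged.
io.imsave("copy.tif", img)
print(np.array_equal(io.imread("copy.tif"), img))   # True

# Lossy compression: pixel values are altered, i.e. the data themselves change.
io.imsave("copy.jpg", img)
print(np.array_equal(io.imread("copy.jpg"), img))   # False: JPEG rewrites the data
```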

All the guidelines constitute the essence of the “Image data integrity.”Footnote 12

Additionally, image analysis should always accompany any claim regarding the formulation of scientific hypotheses based on image data. Results can be easily presented with scatterplots or histograms, with specific color combinations that help readers remain unbiased (e.g., using color-blind-friendly representationsFootnote 13).

Figure preparation can be done using several tools like FIJI, Inkscape, Adobe Illustrator, or OMERO Figure.

The protocol 4.5.A describes how to assemble a panel as in Fig. 4.1.

4.5.1 Questions and Answers

Questions

  • 4.5.1.1 Do I need to use the same visualization settings for all the images? Is it always possible?

  • 4.5.1.2 Which method should be used in order to be consistent in the analysis and figure preparation that include multiple images?

Answers

  • 4.5.1.1 Yes, in the case of sample comparison. If images have been captured from samples that are not comparable, then different B&C can be used, but the visualization range used in both cases should be stated and/or shown with a calibration bar.

  • 4.5.1.2 Log every step (for example, if the software operation is recordable, as with the FIJI macro recorder) or use scripting.

Protocol 4.5.A: Figure Preparation in ImageJ/FIJI—Level: Basic

  1. Import the multichannel image in FIJI using “Bio-Formats”, as hyperstack and composite.

  2. Adjust B&C in every channel, as “Composite”: “Image/Adjust/Brightness/Contrast…”.

  3. Get an RGB image showing the composite: “Image/Type/RGB Color”. Show the scale bar with “Analyze/Tools/Scale Bar…”; use the “overlay” option if unsure about the scale bar aesthetics.

  4. Convert the original stack to RGB in the single “Color” view and concatenate it with the composite: “Image/Stacks/Tools/Concatenate…”.

  5. Make a montage: “Image/Stacks/Make Montage…”.

Take-Home Message

  • Image acquisition is possible by using a variety of techniques, which determine the type of datasets obtained and the extracted scientific content.

  • Image analysis should be done consistently across the data and with benchmarked workflows.

  • Science requires a fair presentation of data and openness about the outcome of image-based experiments. It is incredibly important to log every step of the process: from acquisition, through image analysis, to the presentation of results.