Review

Introduction

In the current ‘big data’ era [1], the magnitude of the data explosion in life science research is undeniable. The biomedical literature currently includes about 27 million abstracts in PubMed and about 3.5 million full-text articles in PubMed Central. Additionally, there are more than 300 established biological databases that store information about various biological entities (bioentities) and their associations; obvious examples include diseases, proteins, genes, chemicals, pathways, small molecules, ontologies, sequences, structures and expression data. In the past 250 years, only 1.2 million eukaryotic species (out of the approximately 8.8 million that are estimated to be present on earth) [2] have been identified and taxonomically classified in the Catalogue of Life and the World Register of Marine Species [3]. The sequencing of the first human genome (completed in 2003) took 13 years and cost over $3 billion. Although the cost of de novo assembly of a new genome to acceptable coverage is still high, probably at least $40,000, we can now resequence a human genome for $1000 and can generate more than 320 genomes per week [4]. Notably, few species have been fully sequenced, and a large fraction of their gene functions are not fully understood or remain completely unknown [5]. The human genome is 3.3 billion base pairs in length and consists of over 20,000 protein-coding genes organized into 23 pairs of chromosomes [6, 7]. Today, over 60,000 solved protein structures are hosted in the Protein Data Bank [8]; nevertheless, many protein functions remain unknown or are only partially understood.

Shifting from basic research to applied science, personalized medicine is on the cusp of a revolution that will allow the customization of healthcare by tailoring decisions, practices and/or products to the individual patient. To this end, genomic information should be accompanied by medical histories and digital images and should guarantee a high level of privacy. The efficiency and security of distributed cloud-computing systems for organizing, storing and handling medical health records will be one of the big challenges of the coming years.

Information overload, data interconnectivity, the high dimensionality of data and pattern extraction also pose major hurdles. Visualization is one way of coping with such data complexity. The implementation of efficient visualization technologies is necessary not only to present the known but also to reveal the unknown, allowing the inference of conclusions, ideas and concepts [9]. Here we focus on visualization advances in the fields of network and systems biology, present state-of-the-art tools and provide an overview of the technological advances over time, gaining insights into what to expect from the future of visualization in the life sciences.

In the section on network biology below, we discuss widely used tools for graph visualization and analysis, comment on the various network types that often appear in biology and summarize the strengths of the tools along with their citation trends over time. In this section we also distinguish between tools for network analysis and tools designed for pathway analysis and visualization. In the section on genomic visualization, we follow the same approach, distinguishing between tools designed for genome browsing and visualization, genome assembly, genome alignment and genome comparison. Finally, in the section on the visualization and analysis of expression data, we distinguish between tree viewers and tools implemented for multivariate analysis.

Network biology visualization

In the field of systems biology, we often meet network representations in which bioentities are interconnected. In such graphs, each node represents a bioentity and edges (connections) represent the associations between them [10]. These graphs can be weighted or unweighted and directed or undirected. Among the various network types within the field, some of the most widely used are protein-protein interaction networks, literature-based co-occurrence networks, and metabolic/biochemical, signal transduction, gene regulatory and gene co-expression networks [11–13]. As new technological advances and high-throughput techniques come to the forefront every few years, such networks can increase dramatically in size and complexity, and therefore more efficient algorithms for analysis and visualization are necessary; notably, even a network consisting of a few hundred nodes and connections is incomprehensible and impossible for a human to analyze visually. Techniques such as tandem affinity purification (TAP) [14], yeast two-hybrid (Y2H) [15] and mass spectrometry [16], for example, can nowadays generate a significant fraction of the physical interactions of a proteome. To show how network biology has evolved over time, we indicate standard procedures developed over the past 20 years and highlight key tools and methodologies that played a crucial role in this maturation process (Fig. 1).

Fig. 1

Visualization for network biology. a Timeline of the emergence of relevant technologies and concepts. b A simple drawing of an undirected unweighted graph. c A 2D representation of a yeast protein-protein interaction network visualized in Cytoscape (left) and potential protein complexes identified in that network by the MCL algorithm (right). d A 3D view of a protein-protein interaction network visualized by BioLayout Express3D. e A multilayered network integrating different types of data visualized by Arena3D. f A hive plot view of a network in which nodes are mapped to and positioned on radially distributed linear axes. g Visualization of network changes over time. h Part of the lung cancer pathway visualized by iPath. i Remote navigation and control of networks by hand gestures. j Integration and control of 3D networks using VR devices

In the 1990s, two-dimensional (2D) static graph layouts were developed for visualizing networks. Topological analysis, layout and clustering were pre-calculated and the results were captured in a single static image. Clustering analysis was performed to detect cliques or highly connected regions within a graph; layout techniques such as Fruchterman-Reingold [17] were implemented to place nodes in positions that minimize edge crossings; and topological analysis was used to detect important nodes of the network, such as hubs or nodes with high betweenness centrality. The typical visual encoding consisted of using arrows for directed graphs, adjusting the thickness of an edge to show the importance of a connection, using the same color for nodes that belong to the same cluster, or modifying a node's size to show its topological features, such as its neighbor connectivity. As integrative biology and high-throughput techniques advanced over the years, the necessity to move away from static images and to add interactivity and navigation for easier data exploration became clearer.
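
As an illustration of the topological measures mentioned above, the short Python sketch below computes node degrees on a hypothetical toy edge list and flags candidate hubs; real tools additionally compute measures such as betweenness centrality and apply force-directed layouts such as Fruchterman-Reingold. The network and node names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical toy interaction network given as an undirected edge list.
edges = [("A", "B"), ("A", "C"), ("A", "D"), ("A", "E"), ("B", "C"), ("D", "E")]

# Build an adjacency map and compute each node's degree (neighbor connectivity).
adjacency = defaultdict(set)
for u, v in edges:
    adjacency[u].add(v)
    adjacency[v].add(u)

degrees = {node: len(neighbors) for node, neighbors in adjacency.items()}

# Flag candidate "hubs": nodes whose degree is well above the network average.
mean_degree = sum(degrees.values()) / len(degrees)
hubs = sorted(node for node, degree in degrees.items() if degree > mean_degree)

print(degrees)  # {'A': 4, 'B': 2, 'C': 2, 'D': 2, 'E': 2}
print(hubs)     # ['A']
```

In a typical visual encoding, the degree computed here would then be mapped to node size, so that hubs stand out immediately in the drawing.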

Bridging analysis and visualization became necessary, and tools that incorporated both raised the standards in the field. In clustering analysis, for example, new computational methods such as MCL [18] and its variations [19], Cfinder [20], MCODE [21], Clique [22] and others were applied to biological networks to find highly connected regions of importance. DECAFF [23], SWEMODE [24] and STM [25], for example, were developed to predict protein complexes [26] by incorporating graph annotations, whereas others such as DMSP [27], GFA [28] and MATISSE [29] focused on gene-expression data. Most of these algorithms were command-line based, and only a few tools such as jClust [30], GIBA [31], ClusterMaker [32] and NeAT [33] have been developed to integrate the data in visual environments. These and other techniques are discussed thoroughly elsewhere [26, 34–36].
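
The idea behind these clustering methods can be conveyed with a deliberately simplified stand-in (far cruder than MCL or MCODE): the Python sketch below peels the 2-core of a hypothetical toy network, i.e. it repeatedly strips loosely attached nodes, and then reports the densely connected regions that survive. The network and node names are invented for illustration.

```python
from collections import defaultdict

# Hypothetical toy network: two dense triangles plus peripheral pendant nodes.
edges = [("A1", "A2"), ("A1", "A3"), ("A2", "A3"),   # dense region 1
         ("X1", "X2"), ("X1", "X3"), ("X2", "X3"),   # dense region 2
         ("A3", "B"), ("X1", "Y")]                   # loosely attached nodes

adj = defaultdict(set)
for u, v in edges:
    adj[u].add(v)
    adj[v].add(u)

# Peel the 2-core: repeatedly remove nodes with fewer than two neighbors.
changed = True
while changed:
    changed = False
    for node in list(adj):
        if len(adj[node]) < 2:
            for neighbor in adj.pop(node):
                if neighbor in adj:
                    adj[neighbor].discard(node)
            changed = True

# Report the connected components that survive the peeling (iterative DFS).
seen, components = set(), []
for start in adj:
    if start in seen:
        continue
    stack, comp = [start], set()
    while stack:
        n = stack.pop()
        if n in comp:
            continue
        comp.add(n)
        seen.add(n)
        stack.extend(adj[n] - comp)
    components.append(sorted(comp))

print(components)  # [['A1', 'A2', 'A3'], ['X1', 'X2', 'X3']]
```

Dedicated algorithms such as MCL replace this naive peeling with flow simulation or density scoring, but the goal is the same: isolating candidate complexes as dense cores of the graph.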

Most network visualization tools are standalone applications, but they guarantee efficient data exploration and interactive manipulation of the visualization, with supporting actions such as mouse hovering; examples include Pajek [37], Osprey [38] and VisANT [39]. Next-generation visualization tools took advantage of standard file formats such as BioPAX [40, 41], SBML [42], PSI-MI [43] and CellML [44]; modern, more sophisticated layouts such as hive plots [45]; and the available web services and data integration techniques to directly retrieve and handle information from public repositories on the fly. Functional enrichment of genes using the Gene Ontology (GO) repository [46] is a typical example. Among others, current state-of-the-art tools are Ondex [47], Cytoscape [48] and Gephi [49], while tools such as iPath [50], PATIKA [51], PathVisio [52] and others [53] are pathway specific.

As biological networks grew over time to thousands of nodes and connections, the so-called ‘hairball’ effect, in which many nodes are densely connected with each other, became very difficult to cope with. A partial solution was to shift from 2D to three-dimensional (3D) representations. Tools such as Arena3D [54, 55] or BioLayout Express3D [56] take advantage of 3D space to show data in a virtual 3D universe. BioLayout Express3D uses the whole 3D space to visualize networks, whereas Arena3D implements a multilayered concept to present 2D networks in a stack. Although a 2D network allows immediate visual feedback, a 3D rendering usually requires the user to interact with the data in a more explorative mode, but it can help reveal interesting features that are potentially hidden in a 2D representation. Whether 3D rendering is better than 2D visualization is debatable, and hardware acceleration and performance still need to be taken into account when planning 3D visualizations (Fig. 1).

Tables 1 and 2 present currently freely available network and pathway visualization tools and their main characteristics. It is not the purpose of this review to perform a deeper comparative analysis of all available 2D and 3D visualization tools, as this is available elsewhere [53, 57–59]. Nevertheless, as network biology has gained ground over the years, we sought to investigate the impact of the current tools in the field. To accomplish this, we tracked the tools that appeared after the year 2000 and whose respective articles are indexed by Scopus (Fig. 2), keeping track of the citations of only the first original publication for each tool. Although the number of citations is a reasonable indicator of popularity, it can sometimes be misleading, as several versions of a tool may appear in different articles that we did not track. Nevertheless, some immediate conclusions can be reached: Cytoscape seems to be by far the biggest player in network visualization, as it comes with more than 200 plugins [60] implemented by an active developer community (Fig. 2a). Similarly, MapMan [61] and Reactome SkyPainter [62] are the most used tools for pathway visualization (Fig. 2b).

Table 1 Visualization tools for network biology
Table 2 Visualization tools for pathways
Fig. 2

Citation trends and key player tools in network biology. a Citations of network visualization tools based on Scopus. b Citations of pathway visualization tools based on Scopus. The numbers of citations of each tool in 2015 are shown after its name

Over the past 5 years, the data visualization field has become more and more competitive. There is a trend away from standalone applications and towards the integration of visualization implementations within web browsers; therefore, libraries and new programming languages have been dedicated to this task (see the final section below). The greater visibility provided by web implementation means that advanced visualization can more easily become available to non-experts and to the broader community. Finally, one of the biggest visualization challenges today is to capture the dynamics of networks and the way in which topological properties change over time [63]. For this, motion or other sophisticated ideas, along with new human-computer interaction (HCI) techniques, should be taken into consideration. Although serious efforts are under way [54, 64, 65], there is still much to expect in the future as HCI techniques and virtual reality (VR) devices (such as the Oculus Rift) become cheaper and more advanced over time (Fig. 1).

Visualization in genomics

Many open challenges remain in the advanced visualization of genome assemblies, alignments, polymorphisms, variations, synteny, single nucleotide polymorphisms (SNPs), rearrangements and annotations [66, 67]. To better follow progress in the visualization field, we first need to follow the way in which new technologies, questions and trends have been shaped over the years (Fig. 3).

Fig. 3

Visualization for genome biology. a Timeline of the emergence of relevant technologies and concepts. b A typical normal human karyotype. c Visualization of BLAST hits and alignment of orthologous genes for the human TP53 gene. d The human TP53 gene and its annotations visualized by the UCSC genome browser. e Visualization of a de novo genome assembly from its DNA fragments. f Examples of balanced and unbalanced genomic rearrangements. g Hypothetical visualization of genomic structural variations across time

Up to the 1990s, local and global pairwise and multiple sequence alignment algorithms such as Smith-Waterman [68], Needleman-Wunsch [69], FASTA [70] and BLAST [71] were the focus of bioinformatics methods development. Multiple sequence alignment tools such as the ClustalW/Clustal X [72], MUSCLE [73], T-Coffee [74] and others [75] used basic visualization schemes, in which sequences were represented as strings placed vertically in stacks. Colors were used to visually encode base conservation and to indicate matching, non-matching and similar nucleotides [76, 77].
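
As a reminder of how these classic dynamic-programming aligners work, here is a minimal Python sketch of the Needleman-Wunsch global alignment score. The match/mismatch/gap scores are illustrative only; real tools use substitution matrices and affine gap penalties, and also return the alignment itself via traceback.

```python
def needleman_wunsch(a, b, match=1, mismatch=-1, gap=-2):
    """Global alignment score via the classic dynamic-programming recurrence."""
    rows, cols = len(a) + 1, len(b) + 1
    # dp[i][j] = best score for aligning the prefixes a[:i] and b[:j].
    dp = [[0] * cols for _ in range(rows)]
    for i in range(1, rows):
        dp[i][0] = i * gap          # a[:i] aligned against gaps only
    for j in range(1, cols):
        dp[0][j] = j * gap          # b[:j] aligned against gaps only
    for i in range(1, rows):
        for j in range(1, cols):
            diagonal = dp[i - 1][j - 1] + (match if a[i - 1] == b[j - 1] else mismatch)
            dp[i][j] = max(diagonal,             # align a[i-1] with b[j-1]
                           dp[i - 1][j] + gap,   # gap in b
                           dp[i][j - 1] + gap)   # gap in a
    return dp[-1][-1]

print(needleman_wunsch("GATTACA", "GATCA"))  # 5 matches, 2 gaps -> score 1
```

The Smith-Waterman local variant differs mainly in clamping each cell at zero and reporting the matrix maximum, which is what makes it find the best local rather than end-to-end alignment.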

Although these tools were successful for small numbers of nucleotide or protein sequences, questions were raised about their applicability to whole-genome sequencing and comparison. A few years later (2003), first-generation Sanger (dideoxy) sequencing, particularly capillary approaches, allowed the sequencing of the first whole human genome, consisting of about 3 billion base pairs and over 20,000 genes [78, 79]. Shortly after that, second-generation (Illumina [80], Roche/454 [81], Applied Biosystems/SOLiD [82]) and third-generation (Helicos BioSciences [83], Pacific Biosciences [84], Oxford Nanopore [85] and Complete Genomics [86]) high-throughput sequencing techniques [87–91] allowed the sequencing of a transcriptome, an exome or a whole genome at a much lower cost and within reasonable timeframes.

Projects such as the 1000 Genomes Project, for comprehensive human genetic variation analysis [92–94], and the International HapMap Project [95–99], for the identification of common genetic variations among people from different countries, are just a few examples of the data explosion that has taken place in the era of comparative genomics, after 2005. Such large-scale genomic datasets necessitate powerful tools to link genomic data to their source genome and across genomes. Therefore, among other tools [66], widely used standalone and web-based genome browsers were developed for information handling, genome visualization, navigation, exploration and integration with annotations from various repositories. At present, many specialized tools for comparative genomic visualization are available and widely used.

To follow trends in the field, we divide the tools into four categories: genome alignment visualization tools (Table 3); genome assembly visualization tools (Table 4); genome browsers (Table 5); and tools to directly compare different genomes with each other for the efficient detection of SNPs and genomic variations (Table 6). Following the same approach used for network biology (above), we examined the citation progress of the first article published for each tool using the Scopus repository (Fig. 4). Consed [76] and Gap [100, 101] seem to be the most widely used alignment viewers, while SAMtools tview [102] is the favorite tool for genome assembly visualization. In addition, the University of California, Santa Cruz (UCSC) Genome Browser [103], Artemis [104] and Ensembl [105, 106] seem to be the go-to genome browsers, while Circos [107], VISTA [108] and cBio [109] are the most widely used tools for comparative genomics.

Table 3 Visualization tools for genome alignments
Table 4 Visualization tools for assemblies
Table 5 Genome browsers
Table 6 Visualization tools for comparative genomics
Fig. 4

Citation trends and key players in genome biology. a Citations of genome alignment visualization tools based on Scopus. b Citations of genome assembly visualization tools based on Scopus. c Citations of genome browsers based on Scopus. d Citations of comparative genomics visualization tools based on Scopus. The numbers of citations of each tool in 2015 are shown after its name

Although tremendous progress has been made in genomic visualization, and very large amounts of money have been invested in such projects, genome browsers [110] still need to address major problems. One of the biggest challenges is the integration of data in different formats (such as genomic and clinical data) as society enters the personalized medicine era. Furthermore, navigation at different resolution or granularity levels and smooth scaling are necessary, while simultaneous comparison across millions of elements [111] remains a bottleneck. Newer infrastructure and software that allow on-the-fly calculations in both the front end and the back end would definitely be a step forward. Finally, similarly to network biology, time-series data visualization is one of the great challenges. In a hypothetical scenario in which one needs to follow genomic rearrangements over time during tumor development, for example, time-series data visualization would be an invaluable tool; motion integration and visualization using additional dimensions could be possible solutions. Overall, it would be unrealistic to expect an ideal universal genome browser that serves all possible purposes in the field.

Visualization and analysis of expression data

Microarrays [112] and RNA sequencing (RNAseq) [87] are the two main high-throughput techniques for measuring the expression levels of large numbers of genes simultaneously. Both methods are revolutionary, as one can simultaneously monitor the effects of certain treatments, diseases and developmental stages on gene expression across time (Fig. 5a) and for multiple transcript isoforms. Although microarray and RNAseq technologies are comparable to each other [113], the latter tends to dominate, especially as sequencing technologies have improved and there are now robust statistics to model the particular noise characteristics of RNAseq, particularly at low expression levels [114]. Microarrays are still cheaper and, in some contexts, may be more convenient, as their analysis is simpler and requires less computing infrastructure.

Fig. 5

Multivariate analyses and visualization. a Timeline of the emergence of relevant technologies and concepts. b Visualization of k-means partitional clustering algorithm. c 3D visualization of a principal component analysis. d Visualization of gene-expression measures across time using parallel coordinates. e Visualization of gene-expression clustering across time. f 2D hierarchical clustering to visualize gene expressions against several time points or conditions. g Hypothetical integration of analyses and expression heatmaps and the control of objects by VR devices

In both cases, a typical analysis procedure is first to normalize experimental and batch differences between samples and then to identify up- and downregulated genes based on a fold-change level when comparing across samples, such as between healthy and diseased tissue. Statistical approaches are used to assess how reliable the fold-change measurements are for each transcript of interest by modeling variation across transcripts and experiments. Subsequently, functional enrichment is performed to identify pathways and biological processes in which the up- and downregulated genes may be involved. Although there are numerous functional enrichment suites [115], DAVID [116], Panther [117] and WebGestalt [118] are among the most widely used.
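
The fold-change step of such a pipeline can be sketched in a few lines of Python. The gene names, expression values and two-fold threshold below are hypothetical, and real pipelines add statistical testing of the fold changes rather than a bare cutoff.

```python
import math

# Hypothetical normalized expression values: (healthy, diseased) per gene.
expression = {
    "GENE1": (50.0, 400.0),   # strongly up in the diseased sample
    "GENE2": (300.0, 290.0),  # essentially unchanged
    "GENE3": (220.0, 25.0),   # strongly down in the diseased sample
}

THRESHOLD = 1.0  # |log2 fold change| >= 1, i.e. at least a two-fold change

regulation = {}
for gene, (healthy, diseased) in expression.items():
    log2_fc = math.log2(diseased / healthy)
    if log2_fc >= THRESHOLD:
        regulation[gene] = "up"
    elif log2_fc <= -THRESHOLD:
        regulation[gene] = "down"
    else:
        regulation[gene] = "unchanged"

print(regulation)  # {'GENE1': 'up', 'GENE2': 'unchanged', 'GENE3': 'down'}
```

Working on the log2 scale makes up- and downregulation symmetric: a doubling gives +1 and a halving gives −1, which is why fold-change thresholds are usually stated in log2 units.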

When gene expression is measured across many time points or conditions, for example to observe the expression patterns following treatment, various analyses can be considered. Principal component analysis or partitional clustering algorithms such as k-means [119] can be used to group together genes with similar behavior patterns. Scatter plots are the typical visualization to represent such groupings: each point on the plane represents a gene, and the closer two genes appear, the more similar they are (Fig. 5b, c).
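
A minimal sketch of the k-means idea is given below, assuming genes have already been reduced to points in a 2D space. The coordinates are invented, and the naive "first k points" initialization is chosen only to keep the example deterministic; production implementations use smarter, randomized seeding and convergence checks.

```python
def kmeans(points, k, iterations=20):
    """Minimal k-means for 2D points (e.g. genes in a reduced expression space).
    Naive initialization (first k points) keeps the sketch deterministic."""
    centers = list(points[:k])
    clusters = [[] for _ in range(k)]
    for _ in range(iterations):
        # Assignment step: each point joins the cluster of its nearest center.
        clusters = [[] for _ in range(k)]
        for p in points:
            nearest = min(range(k),
                          key=lambda c: (p[0] - centers[c][0]) ** 2
                                      + (p[1] - centers[c][1]) ** 2)
            clusters[nearest].append(p)
        # Update step: move each center to the mean of its assigned points.
        for c, members in enumerate(clusters):
            if members:
                centers[c] = (sum(x for x, _ in members) / len(members),
                              sum(y for _, y in members) / len(members))
    return clusters

# Two hypothetical groups of genes with similar expression profiles.
points = [(0.1, 0.2), (0.0, 0.1), (0.2, 0.0),   # group near the origin
          (5.0, 5.1), (5.2, 4.9), (4.9, 5.0)]   # group near (5, 5)
clusters = kmeans(points, k=2)
print([sorted(c) for c in clusters])
```

In a scatter-plot visualization of the result, each cluster would typically be drawn in its own color, which is exactly the grouping shown schematically in Fig. 5b.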

To categorize genes with similar behavior patterns across time (Fig. 5d), hierarchical clustering based on expression correlation can be performed. Average linkage, complete linkage, single linkage, neighbor joining [120] and UPGMA [121] are the most widely used methods. These approaches require an all-against-all distance or correlation matrix showing the similarities between each pair of genes, and genes are placed as leaves in a tree hierarchy. The two most widely used correlation metrics for expression data are the Spearman and Pearson correlations. A list of tree viewers for visualizing hierarchical clustering is presented in Table 7. A more advanced visualization method combines trees with heatmaps (Fig. 5e): genes are grouped together according to their expression patterns in a tree hierarchy, and the heatmap is a graphical representation of individual gene-expression values as colors, with darker colors indicating higher expression values and vice versa. An even more complex visualization, 2D hierarchical clustering, is shown in Fig. 5f: genes are clustered based on their expression patterns across several conditions (vertical tree on the left) and conditions are clustered across genes (horizontal tree). The heatmap shows the correlations between gene groups and conditions, allowing the researcher to conclude whether or not a group of genes is affected by a set of conditions. Heatmaps do, however, have significant drawbacks with regard to color perception: the perceived color of a cell is shaped by the color of the surrounding cells, so two cells with identical color can look very different depending on their position in the heatmap.
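
The all-against-all correlation matrix that such clustering starts from can be sketched as follows; the gene names and expression profiles are hypothetical. Converting the Pearson correlation r to a distance 1 − r places co-expressed genes close together (d near 0) and anti-correlated genes far apart (d near 2), which is what the linkage methods then operate on.

```python
import math

def pearson(x, y):
    """Pearson correlation coefficient between two equal-length profiles."""
    n = len(x)
    mean_x, mean_y = sum(x) / n, sum(y) / n
    cov = sum((a - mean_x) * (b - mean_y) for a, b in zip(x, y))
    sd_x = math.sqrt(sum((a - mean_x) ** 2 for a in x))
    sd_y = math.sqrt(sum((b - mean_y) ** 2 for b in y))
    return cov / (sd_x * sd_y)

# Hypothetical expression profiles measured over five time points.
profiles = {
    "geneA": [1.0, 2.0, 3.0, 4.0, 5.0],   # rising
    "geneB": [2.1, 4.2, 5.9, 8.1, 9.9],   # rising, co-expressed with geneA
    "geneC": [5.0, 4.0, 3.1, 2.0, 1.2],   # falling, anti-correlated
}

# All-against-all correlation-derived distance matrix: d = 1 - r.
genes = list(profiles)
distance = {(g, h): 1.0 - pearson(profiles[g], profiles[h])
            for g in genes for h in genes}

print(round(distance[("geneA", "geneB")], 2))  # near 0: cluster together
print(round(distance[("geneA", "geneC")], 2))  # near 2: far apart
```

A hierarchical clustering routine would repeatedly merge the closest pair in this matrix, building the tree whose leaves are then drawn next to the heatmap as in Fig. 5e.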

Table 7 Tree viewers and phylogenies

Although RNAseq analysis is still an active field, microarray analysis has matured considerably over the past 15 years and many suites for analyzing such data are currently available (Table 8). To identify the key players in the field of microarray/RNAseq visualization, we followed the citation patterns of the available tools in Scopus (Fig. 6). MEGA [122], ARB [123], NJplot [124], Dendroscope [125] and iTOL [126] are the most widely used tree viewers for visualizing phylogenies and hierarchical clustering results. MultiExperiment Viewer [127], Genesis [128], GenePattern [129] and EXPANDER [130] are advanced suites that can perform various multivariate analyses such as the ones discussed in this section. Nevertheless, the commercial GeneSpring platform and the R/Bioconductor framework [131, 132] are the most heavily used for such analyses.

Table 8 Microarray and RNAseq analysis viewers
Fig. 6

Citation trends and tools for gene-expression analysis. a Citations of microarray/RNAseq visualization tools based on Scopus. b Citations of tree viewers based on Scopus. The numbers of citations of each tool in 2015 are shown after its name

Concerning the future of multivariate data visualization, new HCI techniques and VR devices could allow parallel visualizations, analyses and data integration simultaneously (Fig. 5g).

Programming languages and complementary libraries for building visual prototypes

Although the field of biological data visualization has been active for 25 years, it is still evolving rapidly today as the complexity and size of the results produced by high-throughput approaches increase. Although most current software is offered in the form of standalone distributions, a shift towards web visualization is under way. Important features of modern visualization tools include: interactivity; interoperability; efficient data exploration; quick visual data querying; smart visual adjustment for devices with different dimensions and resolutions; fast panning; fast zooming in and out; multilayered visualization; visual comparison of data; and smart visual data filtering. As functions and libraries implementing these features for standalone applications become available, similar libraries for web visualization quickly follow. Therefore, in this section we discuss the latest programming languages, libraries and application program interfaces (APIs) that automate and simplify many of the aforementioned features, enabling higher-quality visualization implementations. It is not in the scope of this review to describe all programming-language possibilities for data visualization extensively; we therefore focus on the five languages that are most used for high-throughput biological data. Nevertheless, Table 9 summarizes other languages, along with generic and language-specific libraries (for R, Perl and Python), that target specific problems and make the implementation of biological data visualization more practical.

Table 9 Programming languages and libraries to build visual prototypes

Processing

‘Processing’ is a programming language and development platform for writing generative, interactive and animated standalone applications. Basic shapes such as lines, triangles, rectangles and ellipses, inner/outer coloring, and basic operations such as transformations, translations, scaling and rotations can be implemented in a single line of code, and each shape can be drawn within a canvas of a given dimension and refresh rate. It is designed to ease the implementation of 2D dynamic visualizations, but it also supports 3D rendering, although this is not optimized. Its core library is now extended by more than 100 other libraries, and it is one of the best-documented languages in the field. The integrated development environment allows the export of executable files for the Windows, MacOS and Linux operating systems as well as Java applet .jar files. Finally, it can serve as an excellent educational tool for teaching the fundamentals of computer programming in a visual context. It is free to download, can easily be plugged into a Java standalone application, and is fully compatible with the NetBeans and Eclipse environments. Code examples and tutorials can be found at [133].

Processing.js

Java applets were an easy way to run standalone applications within web browsers, but this technology has now largely been abandoned because of security concerns. To avoid JavaScript’s complexity and to compensate for applet limitations, Processing.js was implemented as the sister project of the popular Processing programming language, allowing interactive web visualization. It is a mediator between HTML5 and Processing and is designed to allow visual prototypes, digital arts, interactive animations, educational graphs and so on to run immediately within any HTML5-compatible browser, such as Firefox, Safari, Chrome, Opera or Internet Explorer. No plugins are required: one can code any visualization directly in the Processing language, include it in a web page, and let Processing.js bridge the two technologies. Processing.js brings the best of visual programming to the web, both for Processing and for web developers. Code examples and tutorials can be found at [134].

D3

D3 is the main competitor of Processing/Processing.js and has gained ground in recent years. It was initially used to generate scalable vector graphics (SVG). Like Processing.js, it is designed for powerful interactive web visualizations. It is a JavaScript library for manipulating Document Object Model (DOM) objects and a programming interface for HTML, XML and SVG. The idea behind this approach is to load data into a browser and then generate DOM elements based on those data; subsequently, one can apply data-driven transformations to the document. This avoids proprietary representations and affords extraordinary flexibility. With minimal overhead, D3 is extremely fast and supports large datasets and dynamic behaviors for interaction and animation. D3’s functional style allows code reuse through a diverse collection of components and plugins. It is extensively documented and code examples can be found at [135].

Flash

Adobe Flash was once the industry standard for authoring innovative, interactive content. In conjunction with the platform’s programming language, ActionScript, Flash allows designers to implement dynamic visualizations, opening up many possibilities for creativity. Some of the most pioneering, best-practice visualizations built in Flash can be found on online news and media sites, where interactivity supplements and enhances the presentation of information. Because of the lack of support for Flash across Apple’s suite of devices and the emergence of competing technologies that demand less computational power, including D3 and HTML5, Flash is now fading.

Java3D

Java 3D is an API that acts as a mediator between OpenGL and Java and enables the creation of standalone 3D graphics applications and internet-based 3D applets. It is easy to use and provides high-level functions for creating and manipulating 3D objects in space and their geometry. Programmers first create a virtual world and can then place any 3D object anywhere in this world. Rotation around three axes, zooming in and out, and translation of the whole canvas are offered by default, and the hierarchy of transformation groups defines the 3D transformations that can be applied individually to an object or a set of objects. Java 3D code can be compiled under any of the Windows, MacOS and Unix systems.

Conclusion

The future of biological data visualization

Biological data visualization is a rapidly evolving field; nevertheless, it is still in its infancy. Hardware acceleration, standardized exchangeable file formats, dimensionality reduction, visual feature selection, multivariate data analyses, interoperability, 3D rendering and the visualization of complex data at different resolutions are areas in which great progress has been achieved. Additionally, image processing combined with artificial-intelligence-based pattern recognition, new libraries and programming languages for web visualization, interactivity, visual analytics and visual data retrieval, storage and filtering are ongoing efforts with remarkable advances over the past years [58, 136, 137]. Today, many visualization tools serve as front ends for very advanced infrastructures dedicated to data manipulation and have driven significant advances in user interfaces. Although the implementation of sophisticated graphical user interfaces is necessary, the effort to minimize back-end calculations is of great importance. Unfortunately, only a limited number of visualization tools today take advantage of libraries designed for parallelization: multi-threading, for example, allows computational tasks to run in parallel on multiple processor cores, distributed computing spreads tasks across terminals over the network, and CUDA (available on Nvidia graphics cards) allows parallel calculations on multiple graphics processing units.

Despite the fact that multiple screens, light and laser projectors and other technologies partially solve the space limitation problem, HCI techniques are changing the rules of the game, and biological data visualization is expected to adjust to these trends in the longer term. In modern perceptual input systems, 3D control can be achieved without intermediate devices such as mice, keyboards or touch screens [138]. Sony’s EyeToy, Playstation Eye and Artag, for example, use non-spatial computer vision to determine hand gestures. Similarly, the Nintendo Wii and Sony Move devices support object manipulation in 3D space. These actions are mediated through the detection of the position in space of physical devices held by the user or, even more impressively, through immediate tracking of the human body or parts of the human body. Equally impressive is the prospect of ocular tracking, one implementation of which has recently been introduced by the VR startup Fove. The Fove headset tracks eye movement and translates it into spatial movement or even other types of action within the simulated 3D space. The recently implemented Molecular Control Toolkit [139] is a characteristic example of a new API based on the Kinect and Leap Motion devices (which track the human body and human fingers, respectively) to control molecular graphics such as 3D protein structures. Moreover, large screens, tiled arrays and VR environments should be taken into consideration by programmers and designers as they become more and more affordable over time. A great benefit of such technologies is that they allow the representation of complete datasets without the need for algorithms dedicated to dimensionality reduction, which might lead to information loss.

VR environments are expected to bring a revolution in biological data visualization, as one could integrate metabolomic networks and gene expression data in virtual worlds, as in MetNet3D [140], or create virtual universes of living systems such as a whole cell [59, 141–144]. A visual representation of the whole cell and its components in an immersive environment, in which users can visually explore the location of molecules and their interactions in space and time, could lead to a better understanding of biological systems. Oculus Rift (which promoted the reemergence of VR devices), Project Morpheus, Google Cardboard, Sony Smart Eyeglass, HTC Vive, Samsung Gear VR, Avegant Glyph, Razer OSVR, Archos VR Headset and Carl Zeiss VR One are state-of-the-art commercial devices that offer VR experiences. All of them overlay the user’s eyesight with some kind of screen and aim to replace the field of view with a digital 3D alternative. Collectively, these devices employ many technologies and new ideas, such as head-position tracking (allowing for more axes of movement), the substitution of the VR screen with a smartphone (thus harnessing efficient modern smartphone processors), eye tracking and the projection of images straight onto the retina.

Approaching the problem from a different angle, Google Glass, HoloLens and Magic Leap offer an augmented reality experience (the latter is rumored to achieve this by projecting a digital light field into the user’s eye). Augmented reality can facilitate the learning process for biological systems because it builds on exploratory learning. This allows scientists to visualize existing knowledge, whereas the unstructured nature of augmented reality could allow them to construct knowledge themselves by making connections between information and their own experiences or intuition, and thus offer novel insights into the studied biological system [145]. Efforts such as the Visible Cell [141] and CELLmicrocosmos have already begun. The Visible Cell project aims to inform advanced in silico studies of cell and molecular organization in 3D using the mammalian cell as a unitary example of an ordered complex system; the CELLmicrocosmos integrative cell modeling and stereoscopic 3D visualization project is a typical example of the use of 3D vision.

Finally, starting from a living entity, the process of digitizing it, visualizing it, placing it in virtual worlds or even recreating it as a physical object using 3D printing is no longer the realm of science fiction. Data visualization and biological data visualization are rapidly developing in parallel with advances in the gaming industry and HCI. These efforts are complementary and there are already strong interactions developing between these fields, something that is expected to become more obvious in the future.