Neuroinformatics, Volume 9, Issue 2, pp 103–105

Proof-editing is the Bottleneck of 3D Neuron Reconstruction: The Problem and Solutions

Authors

  • Hanchuan Peng
    • Janelia Farm Research Campus, Howard Hughes Medical Institute
  • Fuhui Long
    • Janelia Farm Research Campus, Howard Hughes Medical Institute
  • Ting Zhao
    • Janelia Farm Research Campus, Howard Hughes Medical Institute
  • Eugene Myers
    • Janelia Farm Research Campus, Howard Hughes Medical Institute
Commentary

DOI: 10.1007/s12021-010-9090-x

Cite this article as:
Peng, H., Long, F., Zhao, T. et al. Neuroinform (2011) 9: 103. doi:10.1007/s12021-010-9090-x

Keywords

Neuron reconstruction · Proofreading · Editing · V3D · WYSIWYG

3D neuron reconstruction is a challenging problem. Although much effort has gone into automatically tracing individual neurons, little attention has been paid to proof-editing, which is the real bottleneck. In the following we suggest two possible solutions: (a) a high-speed manual proof-editor based on effective 3D WYSIWYG (“what you see is what you get”) techniques, provided by combining the V3D system with automatic error analyzers, and (b) a fully automatic proof-editor that also predicts the types of errors and the error bounds.

Past and ongoing efforts to reverse engineer a brain usually start by reconstructing, or tracing, neuron models from 3D microscopic images (see footnotes 1–8). Typically this process consists of two steps: (a) automatic tracing of the morphology of a neuron or a neurite bundle, and (b) proof-editing, i.e., proofreading a traced neuron structure and correcting its potential errors.

Much effort, such as the recent DIADEM competition (see footnote 9), has been devoted to developing automatic tracing algorithms, in the hope that automatically produced reconstructions will significantly reduce the manual work done by a human. In this sense, automatic tracing has been thought of as the “bottleneck”. However, the DIADEM competition showed that even though automatic tracing algorithms were able to produce reasonable overall reconstructions, during the judging period the domain experts who provided the test datasets still had to spend many hours examining a small portion of the reconstructions and correcting their structural errors. Searching for errors and correcting them can sometimes take longer than manually tracing an entire neuron. For real applications, such as producing a connectome of neurons (or neurite bundles) from potentially terabytes of high-resolution microscopic images, there is a huge gap between what present-day automatic tracing programs can rapidly deliver and the validation of the tracing results as faithful biology. Therefore, the real bottleneck in producing meaningful 3D reconstructions is not automatic tracing, but proof-editing.

Proof-editing is unavoidable in most, if not all, cases, particularly when the quality of a 3D traced neuron structure becomes a concern. It is also difficult, because few existing tools can handle large-scale 3D reconstructions well, overlay them on the original high-resolution image data, effectively visualize the differences between a reconstruction and the respective image voxels, and edit 3D structures directly in 3D.

Effective 3D proof-editing needs novel multi-dimensional image visualization and manipulation methods. These methods should display microscopic images together with their respective reconstructions. More importantly, they must also give users the means to interact with both images and reconstructions directly in 3D space, and thus to edit the traced structure in 3D. This is the 3D-WYSIWYG (“what you see is what you get”) strategy.

However, developing a highly accessible 3D-WYSIWYG tool is challenging. Computer screens display, and our eyes perceive, only 2D projections of 3D objects. Converting such 2D projections into meaningful locations in the 3D image space would typically require expensive virtual-reality devices integrated with high-performance 3D volumetric image rendering. Recently, a new open-source 3D+ image visualization and analysis platform, V3D (see footnote 1), has become able to provide powerful 3D-WYSIWYG functions on an ordinary computer. It allows pinpointing any 3D location in a 3D image stack with one or two mouse clicks. These 3D locations are estimated automatically from the image content, eliminating the need for a virtual-reality device. More importantly, V3D extends this 3D pinpointing function to let a user draw a 3D curve that follows a neuron’s actual shape with a single mouse stroke. With this system, as long as a user can see a potential error in a reconstruction, he/she can fix it right away, directly in 3D space (Fig. 1 and Supplementary Video http://penglab.janelia.org/proj/v3d/v3dneuron_2.0_paint_and_trace_profile.mov). This ergonomic system has been used in our recent work to rapidly reconstruct the neurite patterns of more than 1,000 Drosophila brain GAL4 lines (unpublished data).
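As an illustration of how a 2D mouse click might be mapped to a 3D location without special hardware, the sketch below snaps a click to the brightest voxel along the viewing ray. This is only a minimal, axis-aligned approximation of the idea; V3D’s actual content-based estimation is more sophisticated and handles arbitrary view angles, and the function name and array layout here are our own assumptions.

```python
import numpy as np

def pinpoint_3d(volume, x, y):
    """Estimate the 3D location meant by a 2D click at (x, y).

    Casts a ray along the z-axis of the image stack and picks the
    voxel of maximum intensity, so the click "snaps" to the signal.
    `volume` is a (z, y, x) numpy array; the axis-aligned view is a
    simplifying assumption, not the real method.
    """
    ray = volume[:, y, x]        # intensities along the viewing ray
    z = int(np.argmax(ray))      # the brightest voxel wins
    return (z, y, x)
```

In a noisy image one would restrict the ray to a small neighborhood of the click and weight intensities by depth, but the principle is the same: the image content itself disambiguates the missing third coordinate.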
Fig. 1

V3D WYSIWYG neuron tracing and proof-editing. A user can visualize and manipulate (via a computer mouse) the entire image, or any portion of it, as well as the 3D reconstructions, directly in 3D space. When any discrepancy between a reconstruction and the respective image content is seen, the user can directly edit (correct) the structure in 3D space using mouse strokes. Left: a 3D-rendered (maximum intensity projection) image stack of a fruit fly neuron (red) along with a few co-stained neurons (green) and the neuropil stain (blue). The neuron reconstruction (the surface object) of the red channel is only partially displayed, so some voxels in the red channel remain visible. Boxes A and B indicate two local regions of interest defined by 3D landmarks, which can be supplied by an automatic tracing-error analyzer. Right: zoomed-in 3D views of the two regions of interest. The skeleton of the reconstructed neuron is overlaid on the red channel’s voxel data (displayed here in gray scale for better visualization). Arrow: the control button that invokes the V3D Object Manager.

For a very large image data set, which may contain tens of giga-voxels or more and exceed the capability of the underlying computer hardware, even a highly optimized 3D-WYSIWYG tool may be slow. Automatic analyzers that detect potentially erroneous locations in a reconstruction will therefore be extremely useful. The Roysam team presented such an analyzer (see footnote 9) in the DIADEM competition. Problematic locations can be encoded as a series of 3D landmarks, which are managed by the Object Manager in V3D. The V3D Object Manager (Fig. 1, red arrow) allows a user to open a local 3D volume viewer of both the image region and the reconstruction around each landmark, and thus to perform rapid proof-editing at that location (Fig. 1, zoom-in boxes).

Efficient proof-editing not only leads to reliable reconstructions, but also helps improve automatic neuron tracing algorithms. We can envision an online machine-learning system that starts from a low-quality automatic tracing but uses the proof-edited results as training data to repeatedly refine the automatic tracer until it generalizes well and stably on new data. Ultimately, only a minimal amount of manual proof-editing would then be needed.
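The envisioned refinement loop can be sketched with a deliberately toy “tracer” (a single intensity threshold) standing in for a real tracing algorithm; the proof-edited masks play the role of human corrections, and every name and the update rule here are hypothetical illustrations, not an existing system.

```python
def trace(signal, threshold):
    """Toy automatic 'tracer': keep samples brighter than the threshold."""
    return [v > threshold for v in signal]

def refine(signals, proof_edited_masks, threshold, lr=0.05, max_rounds=200):
    """Minimal sketch of the envisioned online-learning loop:
    trace each image, compare against the proof-edited result, and
    nudge the tracer's parameter until no corrections remain."""
    for _ in range(max_rounds):
        corrections = 0
        for signal, mask in zip(signals, proof_edited_masks):
            for v, want in zip(signal, mask):
                got = v > threshold
                if got and not want:      # false positive: raise threshold
                    threshold += lr
                    corrections += 1
                elif want and not got:    # false negative: lower threshold
                    threshold -= lr
                    corrections += 1
        if corrections == 0:              # traces now match the proof-edits
            break
    return threshold
```

A real system would retrain a full tracing model rather than a scalar parameter, but the stopping condition is the same: the loop ends when new automatic traces no longer require human correction.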

Manual proof-editing may also be replaced by an automatic proof-editor, which can be viewed as a combination of the above automatic error analyzer with an automated neuron-structure editor. Assume we could organize the already proof-edited reconstructions into a database. For every predicted erroneous location, the automated editor would first search the database for the most similar 3D image patch that has already been proof-edited, and for which the “optimal” sub-structure is thus already recorded. The editor would then correct the current structural error by replacing it with this optimal candidate sub-structure, or with a probabilistically blended version of several top candidates. Finally, the editor would update the respective database records. In this way, a neuron could be traced and proof-edited entirely by a machine.
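A minimal sketch of the database-lookup step just described, assuming patches are stored as same-shape arrays and using plain Euclidean distance as a placeholder similarity measure (the function name and data layout are our own, not an actual design):

```python
import numpy as np

def auto_proof_edit(error_patch, database):
    """Given the image patch around a predicted error, return the
    'optimal' sub-structure recorded for the most similar
    already-proof-edited patch in the database.

    `database` is a list of (image_patch, substructure) pairs; the
    Euclidean-distance similarity is a stand-in for a real patch
    descriptor.
    """
    best_patch, best_substructure = min(
        database, key=lambda entry: np.linalg.norm(error_patch - entry[0]))
    return best_substructure
```

Blending several top candidates, as suggested above, would replace the single `min` with a ranked list whose sub-structures are merged probabilistically.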

This automated proof-editor may produce results that differ from manual tracing, make mistakes in editing, or miss some locations. In such cases, it would be very valuable for the automated proof-editor to produce an estimate of the errors remaining in the proof-edited reconstructions. Once the types of errors and the respective error bounds have been specified, the reconstructions will be far more useful for sophisticated quantitative analyses.

The discrepancy between automatic and manual reconstructions may be minimized, but it may never reach zero, especially when the manual reconstructions themselves contain errors. For evaluation purposes, there should ideally be multiple manual reconstructions of the same neuron. Because it is usually very hard to obtain the biological ground-truth reconstruction of a neuron, a reasonable goal is to produce automatic reconstructions whose variation is smaller than that of the multiple manual reconstructions, and which at the same time are more similar to each of the manual reconstructions than the manual results are to each other. In other words, the automatic reconstructions should be more stable (less divergent) and more precise than manual tracing. This criterion has been used in a number of studies, such as the evaluations of the V3D-Neuron tracing system (see footnote 1) and of one of its underlying automatic tracing algorithms, the graph-augmented deformable model (see footnote 2).
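This criterion can be stated operationally as two pairwise-distance comparisons. The sketch below assumes some reconstruction-distance function `dist` (for instance a spatial distance between traced skeletons; its exact definition is left open) and uses a strict worst-case reading of both conditions:

```python
from itertools import combinations

def meets_criterion(auto_recs, manual_recs, dist):
    """Check the two-part evaluation criterion (a sketch; `dist` is any
    symmetric distance between two reconstructions):
      (a) automatic reconstructions vary less among themselves than
          the manual reconstructions do;
      (b) every automatic reconstruction is closer to every manual one
          than any two manual reconstructions are to each other."""
    manual_pairs = list(combinations(manual_recs, 2))
    auto_var = max((dist(a, b) for a, b in combinations(auto_recs, 2)),
                   default=0.0)                      # 0 if only one auto trace
    manual_var = max(dist(a, b) for a, b in manual_pairs)
    worst_cross = max(dist(a, m) for a in auto_recs for m in manual_recs)
    closest_manual_pair = min(dist(a, b) for a, b in manual_pairs)
    return auto_var < manual_var and worst_cross < closest_manual_pair
```

A softer variant would compare averages rather than extremes; either way, the criterion requires at least two manual reconstructions of the same neuron.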

In summary, we believe there are two ways to break the neuron-reconstruction bottleneck: (1) use highly ergonomic 3D tracing and proof-editing tools (in our experience, the V3D WYSIWYG functions are a good option, and this approach could be further enhanced by automatic error analyzers); and (2) develop highly efficient automated proof-editors, which should also produce estimated error bounds. Of course, once such automated proof-editors are realized, the difference between automatic tracing and proof-editing may become insignificant, or may be better studied within a single framework.

Information Sharing Statement

The open-source V3D software can be freely downloaded from http://penglab.janelia.org/proj/v3d.

Footnotes

1. Peng H, Ruan Z, Long F, Simpson JH, Myers EW (2010) V3D enables real-time 3D visualization and quantitative analysis of large-scale biological image data sets. Nature Biotechnology, 28(4): 348–353.

2. Peng H, Ruan Z, Atasoy D, Sternson S (2010) Automatic reconstruction of 3D neuron structures using a graph-augmented deformable model. Bioinformatics, 26(12): i38–i46.

3. Bas E, Erdogmus D (2011) Principal Curves as Skeletons of Tubular Objects: Locally Characterizing the Structures of Axons. Neuroinformatics, doi:10.1007/s12021-011-9105-2.

4. Chothani P, Mehta V, Stepanyants A (2011) Automated tracing of neurites from light microscopy stacks of images. Neuroinformatics, doi:10.1007/s12021-011-9121-2.

5. Türetken E, González G, Blum C, Fua P (2011) Automated Reconstruction of Dendritic and Axonal Trees by Global Optimization with Geometric Priors. Neuroinformatics, doi:10.1007/s12021-011-9122-1.

6. Zhao T, Xie J, Amat F, Clack N, Ahammad P, Peng H, Long F, Myers E (2011) Automated Reconstruction of Neuronal Morphology Based on Local Geometrical and Global Structural Models. Neuroinformatics, doi:10.1007/s12021-011-9120-3.

7. Wang Y, Narayanaswamy A, Tsai C-L, Roysam B (2011) A Broadly Applicable 3-D Neuron Tracing Method Based on Open-Curve Snake. Neuroinformatics, doi:10.1007/s12021-011-9110-5.

8. Narayanaswamy A, Wang Y, Roysam B (2011) 3-D Image Pre-processing Algorithms for Improved Automated Tracing of Neuronal Arbors. Neuroinformatics, doi:10.1007/s12021-011-9116-z.

9. Luisi J, Narayanaswamy A, Galbreath Z, Roysam B (2011) The FARSIGHT Trace Editor: An Open Source Tool for 3-D Inspection and Efficient Pattern Analysis Aided Editing of Automated Neuronal Reconstructions. Neuroinformatics, doi:10.1007/s12021-011-9115-0.

Acknowledgment

We thank Chris Zugates, Fernando Amat, and Margaret Jefferies for reading and commenting on a draft of the manuscript.

Copyright information

© Springer Science+Business Media, LLC 2010