Learning to Segment Neurons with Non-local Quality Measures

  • Thorben Kroeger
  • Shawn Mikula
  • Winfried Denk
  • Ullrich Koethe
  • Fred A. Hamprecht
Part of the Lecture Notes in Computer Science book series (LNCS, volume 8150)


Abstract

Segmentation schemes such as hierarchical region merging or correlation clustering rely on edge weights between adjacent (super-)voxels. The quality of these edge weights directly affects the quality of the resulting segmentations. Unstructured learning methods seek to minimize the classification error on individual edges; this ignores the fact that a few local mistakes (tiny boundary gaps) can cause catastrophic global segmentation errors. Boundary evidence learning should therefore optimize structured quality criteria such as Rand Error or Variation of Information. We present the first structured learning scheme that uses such a structured loss function, and we introduce a new hierarchical scheme that allows us to approximately solve the NP-hard prediction problem even for huge volume images. The value of these contributions is demonstrated on two challenging neural circuit reconstruction problems in serial-sectioning electron microscopy images with billions of voxels. Our contributions lead to a partitioning quality that improves over the current state of the art.
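The structured quality criteria named in the abstract can be computed directly from two flat labelings of the same voxels. The sketch below (a hypothetical helper, not the authors' code) derives both Rand Error and Variation of Information from the joint and marginal label counts:

```python
import numpy as np
from collections import Counter


def rand_error_and_vi(seg, gt):
    """Rand Error and Variation of Information between two labelings
    of the same voxels (illustrative sketch, natural-log VI)."""
    seg = np.asarray(seg).ravel()
    gt = np.asarray(gt).ravel()
    n = seg.size

    joint = Counter(zip(seg.tolist(), gt.tolist()))  # joint label counts
    a = Counter(seg.tolist())                        # marginals of seg
    b = Counter(gt.tolist())                         # marginals of gt

    # Rand index: fraction of voxel pairs on which the labelings agree
    # (same segment in both, or different segments in both).
    pairs = n * (n - 1) / 2.0
    same_both = (sum(c * c for c in joint.values()) - n) / 2.0
    same_a = (sum(c * c for c in a.values()) - n) / 2.0
    same_b = (sum(c * c for c in b.values()) - n) / 2.0
    agree = pairs + 2.0 * same_both - same_a - same_b
    rand_error = 1.0 - agree / pairs

    # Variation of Information: H(seg | gt) + H(gt | seg)
    #                         = 2 H(seg, gt) - H(seg) - H(gt).
    h_joint = -sum((c / n) * np.log(c / n) for c in joint.values())
    h_a = -sum((c / n) * np.log(c / n) for c in a.values())
    h_b = -sum((c / n) * np.log(c / n) for c in b.values())
    vi = 2.0 * h_joint - h_a - h_b

    return rand_error, vi
```

Both measures are non-local in exactly the sense the abstract argues for: a single mislabeled edge that merges two large segments changes the agreement of very many voxel pairs at once, so it is penalized far more heavily than an isolated per-edge classification error.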


Keywords: Random Forest · Edge Weight · Connected Component · Label · Regression Forest · Mouse Dataset

These keywords were added by machine and not by the authors. This process is experimental and the keywords may be updated as the learning algorithm improves.



Copyright information

© Springer-Verlag Berlin Heidelberg 2013

Authors and Affiliations

  • Thorben Kroeger (1)
  • Shawn Mikula (2)
  • Winfried Denk (2)
  • Ullrich Koethe (1)
  • Fred A. Hamprecht (1)
  1. HCI, University of Heidelberg, Germany
  2. MPI for Medical Research, Heidelberg, Germany
