For many years, medical image processing and analysis has been an exciting and evolving field. Significant progress is currently being made in the computer-aided processing and automatic analysis of medical image data, and the limits of feasibility are being expanded every day. Since the advent of deep learning, numerous breakthroughs have been achieved, and new results emerge in rapid succession. For this reason, the German Workshop on Medical Image Processing “Bildverarbeitung für die Medizin” (BVM) continues its more than 20-year tradition of providing an annual forum for the presentation and discussion of the latest algorithms, systems, and applications in this field. The aim is to deepen the interaction between scientists, industry, and users as well as to explicitly include young scientists who report on their bachelor’s, master’s, doctoral, and habilitation projects. Previous BVM workshops were held very successfully in Aachen, Berlin, Erlangen, Freiburg, Hamburg, Heidelberg, Leipzig, Lübeck, and Munich, and the workshop has become a central interdisciplinary forum for the German medical image computing community.

In 2019, BVM was held in Lübeck. It was a great pleasure to welcome many participants from different countries and to host an excellent panel of invited speakers from all over Europe. During the preparation of the conference, 67 original works underwent a critical review process, and 56 were accepted for presentation at the conference. The authors of the 20 best original works were invited to expand their conference papers for inclusion in this special issue. Sixteen extended papers were submitted, and after another round of rigorous reviews and major revisions, nine of them were accepted in time for this special issue. This year, five out of nine papers address topics of machine or deep learning in medical imaging, which is in line with the growing importance of machine learning and pattern recognition in our field.

Gessert et al. present deep transfer learning methods for colon cancer classification in images from confocal laser microscopes. They demonstrate that transfer learning outperforms training from scratch in this application as well. However, there is no general rule on how to perform transfer learning. With several task- and dataset-specific adjustments, further performance gains are obtained.
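
To make the general idea concrete, the following sketch shows a common transfer-learning recipe in PyTorch: an ImageNet-pretrained backbone is reused, its feature extractor is frozen, and only a newly attached classification head is trained. The backbone choice, the number of classes, and the optimizer settings are illustrative assumptions and not the configuration used by Gessert et al.

```python
import torch
import torch.nn as nn
import torchvision.models as models

NUM_CLASSES = 2  # hypothetical, e.g. malignant vs. benign tissue

# Reuse an ImageNet-pretrained backbone instead of training from scratch.
model = models.resnet18(weights=models.ResNet18_Weights.IMAGENET1K_V1)

# Freeze the pretrained feature extractor ...
for param in model.parameters():
    param.requires_grad = False

# ... and replace the classification head with a new, trainable layer.
model.fc = nn.Linear(model.fc.in_features, NUM_CLASSES)

optimizer = torch.optim.Adam(model.fc.parameters(), lr=1e-3)
criterion = nn.CrossEntropyLoss()
```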

In the contribution by Droigk et al., multi-resolution vessel detection in magnetic particle imaging is addressed. They combine a multi-resolution wavelet reconstruction with Gaussian mixture models for foreground segmentation. The results demonstrate an increased structural similarity index for reconstructions obtained with the wavelet method.
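
As an illustration of the Gaussian-mixture part of such a pipeline, the sketch below fits a two-component mixture to the intensities of a reconstructed image and treats the brighter component as foreground. The wavelet reconstruction itself is omitted, and the function name and component choice are assumptions rather than the authors' implementation.

```python
import numpy as np
from sklearn.mixture import GaussianMixture

def gmm_foreground_mask(image: np.ndarray, n_components: int = 2) -> np.ndarray:
    """Label pixels as foreground/background via a GMM over intensities.

    Illustrative sketch only; Droigk et al. combine this kind of model with
    a multi-resolution wavelet reconstruction, which is omitted here.
    """
    intensities = image.reshape(-1, 1)
    gmm = GaussianMixture(n_components=n_components, random_state=0).fit(intensities)
    labels = gmm.predict(intensities).reshape(image.shape)
    # Treat the component with the highest mean intensity as vessel/foreground.
    foreground_label = int(np.argmax(gmm.means_.ravel()))
    return labels == foreground_label
```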

Roser et al. investigate pitfalls in interventional X-ray dose assessment. In particular, the study examines how the exact positioning of the patient, the dosimeter, and the X-ray imaging system affects local X-ray dose prediction. The results indicate that organ doses may be underestimated by 12–20%. Hence, patient-specific modeling and risk assessment are desirable.

Human three-dimensional (3D) pose estimation from multiple two-dimensional (2D) cameras in the operating room is the topic of the paper by Hansen et al. In their framework, they fuse multiple depth cameras using strong pose priors. This approach is compared to several baselines on the Multi-View Operating Room (MVOR) dataset and shows promising results using depth data only.

Kath et al. investigate virtual reality simulation of radio-frequency ablations for various needle geometries. In their approach, they achieve real-time capability using Nvidia’s Compute Unified Device Architecture (CUDA) framework. Results indicate that the method is able to increase safety in real-time ablation planning by avoiding overestimation of the ablated tissue death zone.

In the work by Ritter et al., hyperparameter optimization is used for image analysis in prostate tissue images and live cell data. With their HyperHyper package, they are able to determine hyperparameters robustly for various image processing tasks, increasing their performance.
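
To illustrate what hyperparameter optimization for an image-processing pipeline looks like in its simplest form, the sketch below performs a plain random search over a small parameter grid. This is a generic illustration, not the HyperHyper API; the objective function and parameter names in the usage comment are hypothetical.

```python
import random

def random_search(objective, search_space, n_trials=50, seed=0):
    """Generic random-search sketch (not the HyperHyper API).

    `objective` maps a parameter dict to a score (higher is better);
    `search_space` maps parameter names to lists of candidate values.
    """
    rng = random.Random(seed)
    best_params, best_score = None, float("-inf")
    for _ in range(n_trials):
        params = {name: rng.choice(values) for name, values in search_space.items()}
        score = objective(params)
        if score > best_score:
            best_params, best_score = params, score
    return best_params, best_score

# Hypothetical usage for a segmentation pipeline:
# best, score = random_search(
#     objective=lambda p: segmentation_dice(p["sigma"], p["threshold"]),
#     search_space={"sigma": [0.5, 1.0, 2.0], "threshold": [0.3, 0.5, 0.7]},
# )
```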

Hering et al. propose memory-efficient 2.5D convolutional transformer networks for multi-modal deformable registration with weak label supervision. The method is applied to whole-heart CT and MRI scans. With their novel volume change control term, the method is able to outperform state-of-the-art unsupervised discrete registration frameworks.
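
For readers unfamiliar with volume change in deformable registration, the sketch below computes the standard local measure, the Jacobian determinant of the deformation, for a dense 3D displacement field. This is a generic illustration of the quantity being controlled; the exact form of the volume change control term proposed by Hering et al. is defined in their paper and may differ.

```python
import numpy as np

def jacobian_determinant(disp: np.ndarray) -> np.ndarray:
    """Local volume change of a 3D displacement field `disp` of shape (3, D, H, W).

    A value of 1 means no volume change, <1 compression, >1 expansion.
    Illustrative only; not the authors' specific control term.
    """
    # Spatial derivatives d u_i / d x_j of each displacement component.
    grads = [np.gradient(disp[i], axis=(0, 1, 2)) for i in range(3)]
    J = np.stack([np.stack(g, axis=0) for g in grads], axis=0)  # (3, 3, D, H, W)
    J = J + np.eye(3)[:, :, None, None, None]                   # J = I + grad(u)
    return np.linalg.det(np.moveaxis(J, (0, 1), (-2, -1)))      # (D, H, W)
```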

Reuter et al. present FAConstructor, an interactive tool for the geometric modeling of nerve fiber architectures in the brain. It allows users to define fiber pathways with interpolation methods or parametric functions while providing visual feedback. FAConstructor also allows interactive editing of existing fiber models in a reasonable amount of time.
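
As a small illustration of defining a fiber pathway through a parametric function, the sketch below samples points along a user-supplied curve. The function name and the helix example are hypothetical and do not reproduce FAConstructor's interface or file format.

```python
import numpy as np

def parametric_fiber(fn, t_start=0.0, t_end=1.0, n_points=200):
    """Sample a fiber path from a parametric function t -> (x, y, z).

    Hypothetical illustration only, not the FAConstructor API.
    """
    t = np.linspace(t_start, t_end, n_points)
    return np.stack([fn(ti) for ti in t])  # (n_points, 3) array of fiber points

# Example: a helical fiber centre line.
helix = parametric_fiber(lambda t: (np.cos(2 * np.pi * t),
                                    np.sin(2 * np.pi * t),
                                    0.1 * t))
```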

Breininger et al. present a method for the simultaneous reconstruction of multiple stiff wires from a single X-ray projection. It uses a real and a virtual view to extract the 2D structure of the wires. Based on these two views, the 3D wires are then reconstructed using epipolar geometry. 3D reconstruction errors are in the range of 4 mm.
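
The core two-view geometry behind such a reconstruction can be illustrated by standard linear (DLT) triangulation of a single point from two projection matrices, as sketched below. This shows only the textbook building block, not Breininger et al.'s full multi-wire pipeline, and the variable names are assumptions.

```python
import numpy as np

def triangulate_point(P1: np.ndarray, P2: np.ndarray,
                      x1: np.ndarray, x2: np.ndarray) -> np.ndarray:
    """Linear (DLT) triangulation of one 3D point from two views.

    P1, P2: 3x4 projection matrices of the real and virtual views (assumed known).
    x1, x2: corresponding 2D points (u, v) in each view.
    """
    # Each view contributes two linear constraints on the homogeneous 3D point.
    A = np.stack([
        x1[0] * P1[2] - P1[0],
        x1[1] * P1[2] - P1[1],
        x2[0] * P2[2] - P2[0],
        x2[1] * P2[2] - P2[1],
    ])
    _, _, Vt = np.linalg.svd(A)
    X = Vt[-1]
    return X[:3] / X[3]  # back from homogeneous coordinates
```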

From these excellent contributions, we can observe that deep learning is becoming more and more relevant for our field. Yet, traditional algorithms and image processing still deliver strong, and in some cases even superior, results. Additionally, there are still emerging technologies, such as magnetic particle imaging, that might be developed further to bring even more game-changing potential to the field.