In recent years, deep neural networks have achieved remarkable performance in several tasks related to perception and control. Nevertheless, their use in safety-critical systems remains problematic for a number of reasons.

First, during inference, the execution time of a neural network can be subject to large variations, which may be caused by the specific computing platform, the hardware accelerator, or the framework used to manage the execution of the various network nodes. Second, neural models have been shown to be prone to adversarial attacks, which can induce a wrong prediction through imperceptible perturbations applied to the input. Such adversarial attacks have also been shown to be applicable in the real world through properly crafted patches that can be printed and placed on physical objects, hence without requiring access to the vision system. Third, the prediction of a neural model can also be compromised by inputs that fall outside the typical distribution of the data samples used during training. Detecting or neutralizing such adversarial attacks or out-of-distribution inputs may have a significant impact on the overall execution time of the neural model.

This special issue includes three selected articles that cover three key sub-topics related to “Predictable Machine Learning”: timing vs. accuracy tradeoffs, safe and predictable implementation of machine learning algorithms for embedded systems, and predictability of machine learning algorithms in autonomous driving frameworks.

The first article, “Scheduling IDK Classifiers with Arbitrary Dependences to Minimize the Expected Time to Successful Classification” by Abdelzaher et al., considers a model specialized for classification-based machine perception problems, trading off accuracy against execution time to meet timing constraints while maximizing classification accuracy.

The second article, “Extending a predictable machine learning framework with efficient GEMM-based convolution routines” by De Albuquerque Silva et al., focuses on the safe real-time implementation of the inference phase of feed-forward deep neural networks on embedded platforms, with the objective of complying with avionics requirements.

The third article, “Main Sources of Variability and Non-Determinism in AD Software: Taxonomy and Prospects to Handle Them” by Alcon et al., analyzes the sources of variability and non-determinism in autonomous driving software, with a focus on the Apollo autonomous driving framework.

We would like to thank all the authors for submitting their excellent work to this special issue, and the reviewers for providing their valuable feedback. We also thank the Editor-in-Chief of the Springer Real-Time Systems journal, Luis Almeida, and Springer’s editorial assistants for their support throughout the organization of the special issue and in managing the reviewing process.


Daniel Casini

Scuola Superiore Sant’Anna, Italy

daniel.casini@santannapisa.it


Giorgio Buttazzo

Scuola Superiore Sant’Anna, Italy

giorgio.buttazzo@santannapisa.it

Guest Editors