1 Introduction

Verification and validation of process modeling and simulation are increasing in importance in various areas of application, including complex mechatronic and biomechanical tasks with especially strict requirements on numerical accuracy and performance. However, engineers often lack precise knowledge of the process and its input data. This lack of knowledge, together with the inherent inexactness of measurement, makes such general tasks of the verification and validation cycle as designing a formal model and defining the relevant parameters and their ranges difficult to complete.

To assess how reliable a system is, verification and validation analysts have to deal with uncertainty. There are two types of uncertainty: aleatory and epistemic.

Aleatory uncertainty refers to variability similar to that arising in games of chance. It cannot be reduced by further empirical study. Epistemic (reducible) uncertainty refers to the incertitude resulting from lack of knowledge. An example is the absence of evidence about the probability distribution of a parameter. Here, interval methods provide a possible solution strategy.
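To illustrate the idea on a deliberately simple, hypothetical example of our own (the model, the numbers, and the helper functions below are illustrative assumptions, not taken from the papers in this issue), interval arithmetic propagates parameter bounds through a model and returns guaranteed bounds on the result:

    # Minimal sketch of interval propagation for epistemically uncertain
    # parameters; outward rounding, which a rigorous implementation needs,
    # is omitted for brevity.

    def interval_mul(a, b):
        """Product of two intervals given as (lo, hi) pairs."""
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    def interval_div(a, b):
        """Quotient a / b for an interval b that does not contain zero."""
        assert b[0] > 0 or b[1] < 0, "division by an interval containing 0"
        return interval_mul(a, (1.0 / b[1], 1.0 / b[0]))

    # Measured voltage and resistance, known only up to bounds.
    V = (11.9, 12.1)   # volts
    R = (4.8, 5.2)     # ohms

    # Guaranteed enclosure of the dissipated power P = V^2 / R: every
    # admissible combination of V and R yields a value inside this interval.
    P = interval_div(interval_mul(V, V), R)
    print(P)

No probability distribution over V or R is assumed; the computed enclosure is valid for any distribution supported on these intervals, which is precisely what makes interval methods attractive for epistemic uncertainty.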

Another option, mostly discussed in the context of risk analysis, is to use interval-valued probabilities and imprecisely specified probability distributions. The probability of an event can be specified as an interval; probability bounds analysis propagates constraints on a distribution function through mathematical operations. In a more general setting, the theory of imprecise probabilities is a powerful conceptual framework in which uncertainty is represented by closed, convex sets of probability distributions. Bayesian sensitivity analysis and Dempster-Shafer theory are further options.

As the guest editors of this special issue, we are pleased to introduce a collection of articles that were presented and discussed at Dagstuhl Seminar 11371 (http://www.dagstuhl.de/11371), “Uncertainty modeling and analysis with intervals—Foundations, tools, applications”, which took place September 11–16, 2011. The major emphasis of the seminar was on modeling and analyzing uncertainties and propagating them through application systems by using interval arithmetic.

This special issue collects twelve papers presenting various aspects of research based on interval arithmetic. On the one hand, there are theoretical and methodological contributions (Q. Fazal and A. Neumaier, W. Lodwick and O. Jenkins, E. Popova and M. Hladík, F. Zapata et al.). On the other hand, there are presentations of software frameworks for verified scientific computing and the modeling of complex uncertain systems (O. Heimlich et al., M. Zimmer et al.), as well as several applications (T. Dötschel et al., S. Kiel et al., P. Shao and N. Stewart). There is also a group of papers that compare or combine interval and probabilistic approaches (M. Beer and V. Kreinovich, G. Rebner et al., Y. Wang).

2 Summary of the special issue

Twelve papers are included in this special issue after a strict peer-review process that has ensured their quality.

Two papers deal with theoretical and methodological aspects of verified computing. In their paper “Error bound for initial value problems by optimization”, Q. Fazal and A. Neumaier propose an interesting approach for the optimization-based computation of verified state enclosures for initial value problems (IVPs) of ordinary differential equations with uncertain initial states. The authors have developed a new solver, DIVIS (Differential Inequality based Validated IVP Solver), to obtain tight error bounds for numerical solutions of initial value problems. The basic idea is to compute defect estimates for the initial value problem by using an outer ellipsoidal approximation and then to obtain validated state enclosures by applying differential inequalities. The convergence of the method depends on a suitable choice of preconditioner. The scheme is implemented in MATLAB and AMPL, and the resulting enclosures are compared to those obtained with current software tools such as VALENCIA-IVP, VNODE-LP, and VSPODE. In “Outer enclosures to the parametric AE solution set”, E. Popova and M. Hladík consider linear algebraic systems in which the elements of the matrix and of the right-hand side vector depend linearly on interval parameters. They study parametric AE solution sets, which are defined by universally and existentially quantified parameters such that the former precede the latter. The authors propose and analyze three methods for computing outer estimates of such parametric AE solution sets.
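For orientation, the parametric AE solution set studied in the latter paper has the following generic form (our paraphrase of the standard notation, with the parameter vector p split into a universally quantified part p^A and an existentially quantified part p^E):

    Σ^AE = { x ∈ ℝ^n : (∀ p^A ∈ [p^A]) (∃ p^E ∈ [p^E])  A(p) x = b(p) },   where p = (p^A, p^E).

The well-known united solution set (all parameters existentially quantified) and tolerable solution set (matrix parameters universal, right-hand side parameters existential) arise as special cases of this quantifier pattern.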

Several papers deal with the software aspects of verified computations. The paper “Variants of the General Interval Power Function” by O. Heimlich, M. Nehmeier, and J. Wolff von Gudenberg deals with floating-point and interval versions of variants of the power function pow(x, y) = x^y and seeks to establish a reference implementation for the IEEE interval arithmetic standard P1788, which is currently under discussion. Various choices of x and y are considered, including the cases of positive x, negative x, and rational or integer y. Three different variants of strict interval extensions for mathematical and floating-point intervals are presented, and a reference implementation in INTLAB shows that the results are suitable for practical use. In the paper “An overview of C-XSC as a tool for interval arithmetic and its application in computing verified uncertain probabilistic models under Dempster-Shafer theory”, M. Zimmer, G. Rebner, and W. Krämer give an overview of the C++ library C-XSC, which provides many useful data types and functions for (verified) scientific computing based on interval arithmetic. The authors focus on some recent new features that significantly broaden the range of uses of C-XSC, making it more attractive for use in high-performance computing. In addition, a new interface between C-XSC and a MATLAB toolbox, which combines Dempster-Shafer theory with verified interval arithmetic to model uncertain data, is presented.
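For the simplest of these cases, a positive base x, the exact range of x^y over a pair of intervals is attained at endpoint combinations, because x^y = exp(y ln x) and y ln x is bilinear in y and ln x. The following Python sketch is our own simplified illustration of that case only; it is not the reference implementation discussed in the paper, and it ignores the directed rounding that a rigorous implementation must perform:

    def interval_pow_pos(x, y):
        """Range enclosure of { a**b : a in x, b in y } for an interval x
        with x[0] > 0.  For a positive base the extrema are attained at
        endpoint combinations; outward rounding is omitted here."""
        assert x[0] > 0, "this simplified variant handles only positive bases"
        candidates = [a ** b for a in x for b in y]
        return (min(candidates), max(candidates))

    print(interval_pow_pos((0.5, 2.0), (-1.0, 3.0)))   # -> (0.125, 8.0)

The cases of a negative base or a base interval containing zero, which the paper treats systematically, require a case analysis on the exponent and are not covered by this sketch.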

Several papers deal with applications. In the paper “Thermal behavior of high-temperature fuel cells: reliable parameter identification and interval-based sliding mode control”, T. Dötschel, E. Auer, A. Rauh, and H. Aschemann discuss the application of various interval methods to the robust modeling of the thermal subsystem of a high-temperature solid oxide fuel cell. Generalized derivatives are used to cope with non-smooth initial value problems. The paper “Verified distance computation between non-convex superquadrics using hierarchical space decomposition structures”, authored by S. Kiel, W. Luther, and E. Dyllong, deals with an application of verified computing to automatic surgery assistance systems for total hip replacement. The authors present their unified framework for verified geometric computations and its recent advances. They explain how to apply hierarchical decompositions to parametric surfaces and how to incorporate interval contractors into such a decomposition, and they present and compare several methods for computing certified Euclidean distances between two three-dimensional objects. Under the title “Managing uncertainty and discontinuous condition numbers in finite-precision geometric computation”, the authors P. Shao and N. Stewart are concerned with the robust computation of geometric operations using floating-point or interval arithmetic when there is uncertainty in the original input data defining the geometric objects, here represented by subdivision surfaces. The stringent requirement that the computed answer should have the same topological form as the true solution implies that the problem may be ill-conditioned. An innovative technique to recognize these situations and to ask for more information is proposed. The ultimate goal is to provide an algorithm that traps only when a perturbation of the problem data, of size smaller than the uncertainty in the data, would cause a change in the topological form.
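The flavor of certified distance bounds can be conveyed by the simplest primitive appearing at the leaves of a hierarchical decomposition, namely the distance between two axis-aligned boxes. The Python sketch below is our own toy example under that assumption; it is not the framework of Kiel et al., and it ignores rounding errors:

    import math

    def box_distance_bounds(box1, box2):
        """Lower and upper bounds on the Euclidean distance between two
        axis-aligned boxes, each given as a list of (lo, hi) intervals,
        one per coordinate."""
        lo_sq, hi_sq = 0.0, 0.0
        for (l1, h1), (l2, h2) in zip(box1, box2):
            gap = max(0.0, l2 - h1, l1 - h2)   # 0 if the projections overlap
            far = max(h1 - l2, h2 - l1)        # farthest pair of coordinates
            lo_sq += gap * gap
            hi_sq += far * far
        return math.sqrt(lo_sq), math.sqrt(hi_sq)

    # Every pair of points, one from each 3-D box, lies at a distance
    # inside the returned interval.
    print(box_distance_bounds([(0, 1), (0, 1), (0, 1)],
                              [(2, 3), (0, 2), (4, 5)]))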

Two papers analyze possible generalizations of interval-based verified techniques. The paper “Constrained intervals and interval spaces” by W. A. Lodwick and O. A. Jenkins deals with the fact that, while traditional interval techniques provide verified enclosures for the range of a given function over given intervals, these enclosures often have excess width. One of the main reasons for this excess width is that, at each intermediate stage, we only take into account the range of the corresponding intermediate value and ignore the information about how this value depends on the inputs; as a result, we lose the relations between the intermediate results, relations that could decrease the excess width. There exist techniques, such as affine and Taylor arithmetic, that take this dependence into account. The authors show that these techniques can be viewed as computable approximations to a general mathematical scheme of constrained intervals, a scheme which, ideally, can lead to an exact description of the dependencies and thus to the exact ranges. The paper “Orders on intervals over partially ordered sets: Extending Allen’s algebra and interval graph results” by F. Zapata, V. Kreinovich, C. Joslyn, and E. Hogan pursues a different generalization of intervals, namely an extension of interval orders and Allen’s (interval) algebra from the totally ordered set of real numbers to partially ordered sets. The authors describe all possible ordering relations between intervals in a partially ordered space. Their findings generalize a similar result obtained by Allen for linearly ordered spaces, a result that is widely cited and actively used in Artificial Intelligence.
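The dependency effect behind this excess width can be seen on a one-line example of our own (not taken from the paper): evaluating f(x) = x(1 - x) over x ∈ [0, 1] with naive interval arithmetic treats the two occurrences of x as independent.

    def sub(a, b):
        return (a[0] - b[1], a[1] - b[0])

    def mul(a, b):
        p = [a[0] * b[0], a[0] * b[1], a[1] * b[0], a[1] * b[1]]
        return (min(p), max(p))

    x = (0.0, 1.0)

    # Naive evaluation: the two occurrences of x are decoupled.
    print(mul(x, sub((1.0, 1.0), x)))      # (0.0, 1.0), four times too wide

    # Keeping the constraint that both occurrences are the same number
    # (approximated here by sampling the single remaining degree of freedom)
    # reveals the true range, which a constrained-interval treatment aims
    # to describe exactly.
    values = [t / 1000 * (1.0 - t / 1000) for t in range(1001)]
    print(min(values), max(values))        # approximately (0.0, 0.25)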

Finally, several papers deal with a combination of probabilistic and interval uncertainty, a combination that is necessary to produce verified estimates for probabilistic systems. The paper “Verified stochastic methods: Markov set-chains and dependency modeling of mean and standard deviation” by G. Rebner, M. Beer, E. Auer, and M. Stein deals with Markov chains, probabilistic versions of discrete dynamical systems. In a traditional Markov chain, once we know the starting state of the system, we can use the known transition probabilities to predict the probabilities of different future states. In practice, however, we usually know the transition probabilities only with uncertainty, e.g., we only know bounds on each of these probabilities. Different probabilities within these bounds lead, in general, to different probabilities of future states; the goal is then to describe, for each future moment of time, the set of all possible combinations of the corresponding future probabilities. The paper presents novel verified algorithms for describing this set and for estimating the mean and standard deviation of the resulting probability distribution. The paper “Reliable kinetic Monte-Carlo simulation based on random set sampling” by Y. Wang deals with more complex situations, in which analytical expressions for future probabilities are not available and, thus, Monte-Carlo techniques are needed to predict them. Specifically, the paper deals with rare events such as rare chemical reactions or phase transitions; one of the most computationally efficient techniques for simulating such events is the kinetic Monte Carlo (KMC) method. The paper shows how to modify KMC techniques so as to take uncertainty into account, e.g., the fact that we know the reaction rates only with interval uncertainty. Depending on the random factors, the set characterizing the initial and parametric uncertainty is transformed into different sets; thus, we have different sets of future states, with different probabilities, a structure known as a random set. The author shows that sampling based on the random set, instead of the usual Monte-Carlo sampling based on random numbers, improves the robustness of the KMC method. Finally, for the situation in which we do not have enough observations to accurately determine the corresponding probability distribution, the paper “Interval or moments: which carry more information?” by M. Beer and V. Kreinovich tries to answer the question of which of the two descriptions, an interval or the first two moments, carries more information. The authors compare these two approaches from a Shannon information-theoretic point of view and prove that, if 95 % (or smaller) confidence intervals are sought, intervals should be selected, while the first two moments (expectation and variance) should be selected otherwise.
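A toy two-state instance of the Markov set-chain setting considered by Rebner et al. can be written down in a few lines of Python; the numbers and the brute-force vertex enumeration below are our own illustration, not the verified algorithms of the paper:

    from itertools import product

    # Interval bounds on the free transition probabilities of a 2-state chain;
    # the remaining entries follow from the row sums: p12 = 1 - p11, p21 = 1 - p22.
    p11_bounds = (0.7, 0.8)   # P(stay in state 1)
    p22_bounds = (0.5, 0.6)   # P(stay in state 2)

    pi = (0.4, 0.6)           # known initial distribution over the two states

    def step(pi, p11, p22):
        """One step of the chain for a concrete choice of transition probabilities."""
        q1 = pi[0] * p11 + pi[1] * (1.0 - p22)
        return (q1, 1.0 - q1)

    # The next-step probability of state 1 is linear in (p11, p22), so its
    # extreme values over the interval box are attained at the four vertices.
    next_q1 = [step(pi, p11, p22)[0]
               for p11, p22 in product(p11_bounds, p22_bounds)]
    print(min(next_q1), max(next_q1))   # bounds on P(state 1 after one step)

Iterating such bounding steps rigorously, for larger chains and with directed rounding, is the kind of task the verified algorithms described in the paper address.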