Objective Uncertainty Quantification
When designing an operator to alter the behavior of a physical system, the standard engineering paradigm is to begin with a scientific model describing the system, mathematically characterize a class of operators, define a performance cost relative to the operational objective, and select an operator that minimizes that cost. Validation necessarily plays a role, because the scientific model must be validated. With complex systems, or systems for which experiments are costly, there may be insufficient data for system identification, and validation may be impossible. Given the resulting model uncertainty, the best one can do is design a “robust” operator that is optimal relative to both the objective and the uncertainty. This robust optimization paradigm entails optimal experimental design: if one is not satisfied with the performance, choose the experiment that maximally reduces the uncertainty as it pertains to the objective. In this chapter, we address these problems and present examples in the context of gene regulatory network intervention.
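The robust-design and experiment-selection steps described above can be sketched for a toy discrete uncertainty class. Everything concrete here is an assumption for illustration: the prior probabilities, the cost matrix `C[theta, psi]`, and the two candidate experiments are hypothetical, and the selection criterion (expected remaining cost of uncertainty after an experiment's outcome) is one natural reading of the objective-based reduction the abstract describes.

```python
import numpy as np

# Hypothetical uncertainty class: the unknown model parameter theta takes one
# of three discrete values with the prior probabilities below, and candidate
# operator psi incurs cost C[theta, psi] under the true model.
priors = np.array([0.5, 0.3, 0.2])          # P(theta_i) (assumed)
cost = np.array([[1.0, 2.0, 4.0],           # C[theta, psi] (assumed)
                 [3.0, 1.0, 2.0],
                 [4.0, 3.0, 1.0]])

# Robust operator: minimize the expected cost over the model uncertainty.
expected_cost = priors @ cost               # E_theta[C(theta, psi)]
psi_robust = int(np.argmin(expected_cost))

# Cost of uncertainty: how much the robust operator's expected cost exceeds
# the expected cost attainable if the true model were known.
optimal_cost = cost.min(axis=1)             # C(theta, psi_theta)
cost_of_uncertainty = expected_cost[psi_robust] - priors @ optimal_cost

# Experimental design: each hypothetical experiment partitions the theta
# values into distinguishable outcomes; choose the experiment whose expected
# remaining cost of uncertainty (over outcomes) is smallest.
experiments = {
    "e1": [[0], [1, 2]],    # e1 reveals whether theta = theta_0
    "e2": [[0, 1], [2]],    # e2 reveals whether theta = theta_2
}

def expected_remaining_cost_of_uncertainty(partition):
    total = 0.0
    for outcome in partition:
        p = priors[outcome]
        post = p / p.sum()                  # posterior given this outcome
        post_expected = post @ cost[outcome]
        robust_cost = post_expected.min()   # posterior-robust operator's cost
        total += p.sum() * (robust_cost - post @ optimal_cost[outcome])
    return total

best_experiment = min(
    experiments, key=lambda e: expected_remaining_cost_of_uncertainty(experiments[e])
)
```

Under these assumed numbers the robust operator is the one minimizing the prior-averaged cost, and the preferred experiment is the one whose outcomes most reduce the uncertainty that actually matters for operator choice, not merely the entropy of the prior.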
Keywords: Bayesian experimental design · Robust design · Mean absolute cost of uncertainty · Uncertainty quantification