It is an established tradition of the Journal of Discrete Event Dynamic Systems (J-DEDS) to publish every two years a special issue devoted to advances in the area of Discrete Event Systems (DES). For this round, announced in 2016, we selected two broad topics that are central to our research community: performance analysis and optimization, and diagnosis, opacity, and supervisory control. New original contributions in these two areas have appeared recently, and our goal was to take stock of the state of the art and to explore new research directions that may be fruitfully pursued over the next several years. After consulting colleagues at the last International Workshop on DES (WODES 2016, May 2016 in Xi'an, China), we invited seventeen groups of authors to submit papers for two special issues. These papers were reviewed according to the normal review process of J-DEDS.

The first of the two special issues, entitled “Performance Analysis and Optimization of Discrete Event Systems,” contains six papers covering recent developments in discrete-event-based methodologies for Markov chains and Markov decision processes, simulation-based optimization, and new approaches to the performance regulation of DES. A brief description of each paper in this special issue follows.

The first paper in the issue, “Ranking nodes in general networks: a Markov multi-chain approach” by Berkhout and Heidergott, develops a new Markov chain-based methodology for meaningfully ranking the nodes of a network, enhancing Google’s well-established PageRank algorithm.
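As background for readers less familiar with the classical algorithm the paper builds on, the following is a minimal sketch of standard PageRank computed by power iteration on a Markov chain over the nodes. This is the textbook baseline only, not the authors’ multi-chain method; the adjacency matrix and parameter values are illustrative assumptions.

```python
import numpy as np

def pagerank(A, damping=0.85, tol=1e-10, max_iter=1000):
    """Classical PageRank via power iteration.

    A: adjacency matrix (A[i, j] = 1 if node i links to node j).
    Rows are normalized into a row-stochastic transition matrix;
    dangling nodes (no out-links) get uniform transitions.
    """
    n = A.shape[0]
    out = A.sum(axis=1, keepdims=True)
    P = np.where(out > 0, A / np.maximum(out, 1), 1.0 / n)
    r = np.full(n, 1.0 / n)  # start from the uniform distribution
    for _ in range(max_iter):
        # One step of the damped Markov chain (the "Google matrix")
        r_next = damping * (r @ P) + (1.0 - damping) / n
        if np.abs(r_next - r).sum() < tol:
            break
        r = r_next
    return r

# Toy network: node 2 receives links from both 0 and 1, so it ranks highest.
A = np.array([[0, 1, 1],
              [0, 0, 1],
              [1, 0, 0]], dtype=float)
print(pagerank(A))
```

The damping term guarantees the chain is irreducible and aperiodic, so the iteration converges to a unique stationary distribution; the multi-chain setting studied in the paper relaxes exactly this kind of assumption.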

In the next paper, “Solving a class of simulation-based optimization problems using ‘optimality in probability,’” Mao and Cassandras propose a new optimality criterion: optimality in probability in place of the usual optimality in expectation. This criterion favors the solution whose actual performance is most likely to be better than that of any other solution, offering a complement to traditional optimality, especially in dynamic and nonstationary environments.
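A toy Monte Carlo comparison illustrates how the two criteria can disagree. The two candidate "solutions" and their cost distributions below are hypothetical, chosen only to make the contrast visible; they are not taken from the paper.

```python
import random

random.seed(0)

# Two hypothetical candidates with random cost (lower is better).
# X: cost 1 with probability 0.9, cost 100 with probability 0.1 -> mean 10.9
# Y: deterministic cost 5                                       -> mean 5.0
def X():
    return 100.0 if random.random() < 0.1 else 1.0

def Y():
    return 5.0

N = 100_000
wins_x = sum(X() < Y() for _ in range(N))

# Optimality in expectation picks Y (mean 5.0 < 10.9), yet X beats Y
# in roughly 90% of realizations, so X is optimal in probability.
print(wins_x / N)
```

The example shows why the probabilistic criterion can be attractive when performance is realized once per decision epoch rather than averaged over many runs.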

The third paper, “Variance minimization of parameterized Markov decision processes” by Xia, explores the variance minimization problem in Markov decision processes with parameterized policies, which is more challenging to analyze than the traditional average or discounted criteria. An iterative algorithm is derived to reduce the reward variance and is shown to converge to a locally optimal policy.

The paper “Opacity for linear constraint Markov chains” by Bérard, Kouchnarenko, Mullins, and Sassolas considers specifications given as Markov chains that are underspecified in the sense that transition probabilities are only required to belong to some set. In this setting, opacity is defined through an appropriate worst-case measure, which can be computed or approximated for a class of linear Markov chains of this type, and it is shown how opacity can be improved.

In “Applications of generalized likelihood ratio method to distribution sensitivities and steady-state simulation,” Lei, Peng, Fu, and Hu provide applications of the generalized likelihood ratio method to distribution sensitivity estimation for both finite-horizon and steady-state simulation and offer a framework that uniformly treats a number of related sensitivity estimation problems.

Finally, the sixth paper, “Instruction-throughput regulation in computer processors with data-center applications” by Chen, Wardi, and Yalamanchili, considers a recent approach for regulating the output performance of DES by estimating, in real time, the derivative of a plant function via infinitesimal perturbation analysis. The approach is applied to controlling instruction throughput in various industry benchmark programs and data-center settings.

Guest Editors

Christos G. Cassandras and Alessandro Giua