Multi-objective Optimization of Long-run Average and Total Rewards

This paper presents an efficient procedure for multi-objective model checking of long-run average reward (aka mean pay-off) and total reward objectives, as well as their combination. We consider this for Markov automata, a compositional model that captures both traditional Markov decision processes (MDPs) and a continuous-time variant thereof. The crux of our procedure is a generalization of Forejt et al.'s approach for total rewards on MDPs to arbitrary combinations of long-run and total reward objectives on Markov automata. Experiments with a prototypical implementation on top of the Storm model checker show encouraging results for both model types and indicate substantially improved performance over existing multi-objective long-run MDP model checking based on linear programming.


Introduction
MDP model checking. In various applications, multiple decision criteria and uncertainty frequently co-occur. Stochastic decision processes in which the aim is to achieve multiple, possibly partly conflicting, objectives arise in various fields, including operations research, economics, planning in AI, and game theory, to mention a few. This has stimulated model checking of Markov decision processes (MDPs) [46], a prominent model in decision making under uncertainty, against multiple objectives. This development extends the rich collection of automated MDP verification algorithms against single objectives [7].
Multi-objective MDP. Various types of objectives known from conventional (single-objective) model checking have been lifted to the multi-objective case. These objectives range over ω-regular specifications including LTL [26,27], expected (discounted and non-discounted) total rewards [21,27,28,52,22], step-bounded and reward-bounded reachability probabilities [28,35], and, most relevant for this work, expected long-run average (LRA) rewards [18,11,20], also known as mean pay-offs. For the latter, all current approaches build upon linear programming (LP), which yields a theoretical time complexity polynomial in the model size. However, in practice, LP-based methods are often outperformed by approaches based on value or strategy iteration [28,1,42]. The LP-based approach of [27] and the iterative approach of [28] are both implemented in PRISM [45] and Storm [40]. The LP formulation of [11,20] is implemented in MultiGain [12], an extension of PRISM for multi-objective LRA rewards.

Contributions of this paper
We present a computationally efficient procedure for multi-objective model checking of LRA reward and total reward objectives as well as their mixture. The crux of our procedure is a generalization of Forejt et al.'s iterative approach [28] for total rewards on MDPs to expected LRA reward objectives. In fact, our approach supports arbitrary mixtures of expected LRA and total reward objectives; to our knowledge, such mixtures have not been considered so far. Experiments on various benchmarks using a prototypical implementation in Storm indicate that this generalized iterative algorithm outperforms the LP approach implemented in MultiGain.
In addition, we extend this approach towards Markov automata (MA) [25,23], a continuous-time variant of MDPs that is amenable to compositional modeling. This model is well-suited, among others, to provide a formal semantics for dynamic fault trees and generalized stochastic Petri nets [24]. Our multi-objective LRA approach for MA builds upon the value-iteration approach for single-objective expected LRA rewards on MA [17], which, on practical models, outperforms the LP-based approach of [30]. To the best of our knowledge, this is the first multi-objective expected LRA reward approach for MA. Experimental results on MA benchmarks show that the treatment of a continuous-time variant of LRA comes at almost no time penalty compared to the MDP setting.
Other related work. Mixtures of various other objectives have been considered for MDPs, including conditional expectations and ratios of reward functions [5,4]. [31] considers LTL formulae with probability thresholds while maximizing an expected LRA reward. [35,41] address multi-objective quantiles on reachability properties, while [50,20] consider multi-objective combinations of percentile queries on MDP and LRA objectives. [6] treats resilient systems, ensuring constraints on the repair mechanism while maximizing the expected LRA reward when operational. The trade-off between expected LRA rewards and their variance is analyzed in [13]. [33] studies multiple objectives on interval MDPs, where transition probabilities can be specified as intervals in cases where the concrete probabilities are unknown. Multiple LRA reward objectives for stochastic games have been treated using LP [19] and value iteration over convex sets [8,9]; the latter is included in PRISM-games [44,43]. These approaches can also be applied to MDPs viewed as one-player stochastic games. Algorithms for single-objective model checking of MA deal with objectives such as expected total rewards, time-bounded reachability probabilities, and expected LRA rewards [38,29,30,15]. The only multi-objective approach for MA so far [47] shows that any method for multi-objective MDPs can be applied to (a discretized version of) an MA for queries involving unbounded or time-bounded reachability probabilities and expected total rewards, but no LRA rewards.

Preliminaries
The set of probability distributions over a finite set Ω is given by Dist(Ω). R_{≥0}, R_{>0}, and R̄ = R ∪ {−∞, +∞} denote the non-negative, positive, and extended real numbers, respectively. For a point p = ⟨p_1, ..., p_ℓ⟩ ∈ R^ℓ, ℓ ∈ N, and i ∈ {1, ..., ℓ} we write p_i for its i-th entry. For p, q ∈ R^ℓ let p · q denote the dot product. We further write p ≤ q iff ∀i: p_i ≤ q_i and p ⪇ q iff p ≤ q ∧ p ≠ q. The closure of a set P ⊆ R^ℓ is the union of P and its boundary, denoted by cl(P). The convex hull of P is given by conv(P) = {q ∈ R^ℓ | ∃ p_1, ..., p_n ∈ P and λ_1, ..., λ_n ∈ [0,1] with Σ_i λ_i = 1 and q = Σ_i λ_i · p_i}. The downward convex hull of P is given by dwconv(P) = {q ∈ R^ℓ | ∃ p ∈ conv(P): q ≤ p}.
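Since the algorithms below manipulate downward convex hulls of finitely many points, it may help to see how membership in dwconv(P) can be checked computationally. The following is a minimal sketch, not part of our formal development, assuming numpy and scipy are available; `in_dwconv` is a hypothetical helper name.

```python
import numpy as np
from scipy.optimize import linprog

def in_dwconv(q, points):
    """Decide q in dwconv(points) for a finite point set.

    q lies in the downward convex hull iff some convex combination
    of the points dominates q componentwise.
    """
    P = np.asarray(points, dtype=float)   # shape (n, dim)
    q = np.asarray(q, dtype=float)
    n = len(P)
    # Feasibility LP over weights lam: lam >= 0, sum(lam) == 1,
    # and P^T lam >= q, written as -P^T lam <= -q for linprog.
    res = linprog(c=np.zeros(n),
                  A_ub=-P.T, b_ub=-q,
                  A_eq=np.ones((1, n)), b_eq=[1.0],
                  bounds=[(0.0, None)] * n)
    return res.status == 0   # status 0: a feasible solution was found

print(in_dwconv([0.4, 0.4], [[1.0, 0.0], [0.0, 1.0]]))  # True
print(in_dwconv([0.6, 0.6], [[1.0, 0.0], [0.0, 1.0]]))  # False
```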
Definition 1 (Markov automaton). A Markov automaton (MA) is a tuple M = ⟨S, Act, Δ, P⟩, where S = MS^M ∪ PS^M is a finite set of states partitioned into Markovian states MS^M and probabilistic states PS^M, Act is a finite set of actions, Δ assigns an exit rate Δ(s) ∈ R_{>0} to each Markovian state s ∈ MS^M and a set of enabled actions Δ(ŝ) ⊆ Act to each probabilistic state ŝ ∈ PS^M, and P is a probability function that assigns a distribution over possible successor states to each Markovian state and each enabled state-action pair ⟨ŝ, α⟩ ∈ SA^M.
Let M = ⟨S, Act, Δ, P⟩ be an MA. If M is clear from the context, we may omit the superscript from MS^M, PS^M, SA^M, and further notations introduced below. Intuitively, the time M stays in a Markovian state s ∈ MS is governed by an exponential distribution with rate Δ(s) ∈ R_{>0}, i.e., the probability to take a transition from s within t ∈ R_{≥0} time units is 1 − e^{−Δ(s)·t}. Upon taking a transition, a successor state s′ ∈ S is drawn from the distribution P(s), i.e., P(s)(s′) is the probability that the transition leads to s′ ∈ S. For probabilistic states ŝ ∈ PS, an enabled action α ∈ Δ(ŝ) has to be picked and a successor state is drawn from P(⟨ŝ, α⟩) (without any delay). Nondeterminism is thus only possible at probabilistic states. We assume deadlock-free MA, i.e., ∀ ŝ ∈ PS^M: Δ(ŝ) ≠ ∅.
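To make these semantics concrete, the following sketch simulates a single step of a closed MA. This is our illustration under an assumed dictionary-based encoding; `rate`, `enabled`, `succ`, and `choose_action` are hypothetical names, not notation from this paper.

```python
import random

# Hypothetical encoding of a closed MA:
#   rate[s]        exit rate Delta(s) of a Markovian state s
#   enabled[s]     enabled actions Delta(s) of a probabilistic state s
#   succ[key]      successor distribution P(s) or P((s, alpha)),
#                  given as a dict {successor: probability}

def step(s, rate, enabled, succ, choose_action):
    """Simulate one step from state s; returns (successor, sojourn time)."""
    if s in rate:                            # Markovian state
        t = random.expovariate(rate[s])      # exponentially distributed delay
        dist = succ[s]
    else:                                    # probabilistic state
        alpha = choose_action(s, enabled[s]) # a strategy resolves nondeterminism
        t = 0.0                              # no time passes here
        dist = succ[(s, alpha)]
    states, probs = zip(*dist.items())
    return random.choices(states, weights=probs)[0], t
```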

Remark 1.
To enable more flexible modeling such as parallel composition, the literature (e.g., [25,30]) often considers a more liberal variant of MA where (i) different successor distributions can be assigned to the same state-action pair and (ii) states can be both Markovian and probabilistic. MA as in Definition 1, also known as closed MA, are equally expressive: they can be constructed via action renaming and by applying the so-called maximal progress assumption [25].
Definition 2 (Sub-MA). For C ⊆ MS ∪ SA with states(C) denoting the states occurring in C, the sub-MA of M induced by C is given by M[C] = ⟨states(C), Act, Δ_C, P_C⟩, where Δ_C and P_C are the restrictions of Δ and P to C.
A strategy for M resolves the nondeterminism at probabilistic states by providing probability distributions over enabled actions based on the execution history.
A strategy σ is called memoryless if the choice only depends on the current state, i.e., ∀ π, π′ ∈ Paths_fin: last(π) = last(π′) implies σ(π) = σ(π′). If all assigned distributions are Dirac, σ is called deterministic. Let Σ^M and Σ^M_md denote the sets of general and of memoryless deterministic strategies of M, respectively. For simplicity, we often interpret σ ∈ Σ^M_md as a function σ: S → Act ∪ {τ}. The induced sub-MA for σ ∈ Σ^M_md is given by M[MS ∪ {⟨s, σ(s)⟩ | s ∈ PS}]. Strategy σ ∈ Σ^M and initial state s_I ∈ S define a probability measure Pr^{M,s_I}_σ that assigns probabilities to sets of infinite paths [38]. The expected value of f: Paths_inf → R̄ is given by the Lebesgue integral Ex^{M,s_I}_σ(f) = ∫_{π ∈ Paths_inf} f(π) dPr^{M,s_I}_σ(π).

Reward-based Objectives
MA can be equipped with rewards to model various quantities like, e.g., energy consumption or the number of produced units. We distinguish between transition rewards R_trans: (MS ∪ SA) × S → R that are collected when transitioning from one state to another and state rewards R_state: S → R that are collected over time, i.e., staying in state s for t time units yields a reward of R_state(s) · t. Since no time passes in probabilistic states, state rewards R_state(s) for s ∈ PS are not relevant. A reward assignment combines the two notions.
Definition 4 (Reward assignment). A reward assignment for MA M and R_state, R_trans as above is a function R that assigns to each finite path the reward collected along it: every transition from ⟨s, κ⟩ to s′ contributes R_trans(⟨s, κ⟩, s′), and staying in a Markovian state s for t time units contributes R_state(s) · t.

We fix a reward assignment R for M. R can also be applied to any sub-MA of M.

Definition 5 (Total reward). The total reward objective for reward assignment R is given by tot(R): Paths_inf → R̄ with tot(R)(π) = lim sup_{k→∞} R(prefix_steps(π, k)), where prefix_steps(π, k) is the prefix of π comprising k steps.

Definition 6 (LRA reward). The long-run average (LRA) reward objective for R is given by lra(R): Paths_inf → R̄ with lra(R)(π) = lim sup_{t→∞} (1/t) · R(prefix_time(π, t)), where prefix_time(π, t) is the prefix of π up to time point t.

Sect. 4 considers assumptions under which the limit in both definitions can be attained, i.e., lim sup can be replaced by lim. The incorporation of other objectives such as reachability probabilities is discussed in Remark 3.
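For illustration purposes, the following sketch accumulates the reward of a finite path prefix from state and transition rewards; this is exactly the quantity whose limits Definitions 5 and 6 take. The path encoding is a hypothetical simplification of ours, not the paper's notation.

```python
def prefix_reward(prefix, r_state, r_trans):
    """Reward collected along a finite path prefix.

    The prefix is a list of steps (s, kappa, t, s_next): state, chosen
    action (None in Markovian states), sojourn time t, and successor.
    Returns the accumulated reward and the elapsed time.
    """
    total, elapsed = 0.0, 0.0
    for s, kappa, t, s_next in prefix:
        key = s if kappa is None else (s, kappa)
        total += r_state.get(s, 0.0) * t            # state reward over time t
        total += r_trans.get((key, s_next), 0.0)    # transition reward
        elapsed += t
    return total, elapsed

# tot(R) takes the limit of `total` over ever longer prefixes;
# lra(R) takes the limit of `total / elapsed`.
```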

Markov Decision Processes
A Markov decision process (MDP) M is an MA with only probabilistic states, i.e., MS^M = ∅. All notions above also apply to MDPs. However, since all paths of an MDP have duration 0, there is no timing information available. For MDPs, we therefore usually consider steps instead of time. In particular, for a reward assignment R we consider lra_steps(R) instead of lra(R), where lra_steps(R)(π) = lim sup_{k→∞} (1/k) · R(prefix_steps(π, k)). Below, we focus on MA. Applying our results to step-based LRA rewards on MDPs is straightforward. Time-based LRA reward objectives for MA cannot straightforwardly be reduced to step-based measures for MDPs due to the interplay of delayed and undelayed transitions.

Efficient Multi-objective Model Checking
We formalize common tasks in multi-objective model checking and sketch our solution method based on [28]. We fix an MA M = ⟨S, Act, Δ, P⟩ with initial state s_I ∈ S and ℓ > 0 objectives F = ⟨f_1, ..., f_ℓ⟩ with f_j: Paths_inf → R̄. The notation for expected values is lifted to tuples: Ex^{M,s_I}_σ(F) = ⟨Ex^{M,s_I}_σ(f_1), ..., Ex^{M,s_I}_σ(f_ℓ)⟩.

Multi-objective Model Checking Queries
Our aim is to maximize the expected value for each (potentially conflicting) objective f_j. We impose the following assumption, which can be asserted using single-objective model checking. We further discuss the assumption in Remark 2.

Assumption 1. For all j ∈ {1, ..., ℓ}: sup {Ex_σ(f_j) | σ ∈ Σ} < ∞.

Figure 1: MA with achievable points and Pareto front.

The set of achievable points is given by Ach(F) = {p ∈ R^ℓ | ∃ σ ∈ Σ: p ≤ Ex_σ(F)}. A point p ∈ Ach(F) is called achievable: there is a single strategy σ that for each objective f_j achieves an expected value of at least p_j. Due to Assumption 1, the Pareto front is the frontier of the set of achievable points, meaning that it is the smallest set P ⊆ R^ℓ with dwconv(P) = cl(Ach(F)). We can thus interpret Pareto(F) as a representation of cl(Ach(F)) and vice versa. The set of achievable points is closed iff all points on the Pareto front are achievable.
For multi-objective model checking we are concerned with the following queries:

Multi-objective Model Checking Queries
Qualitative Achievability: Given a point p ∈ R^ℓ, decide whether p ∈ Ach(F).
Quantitative Achievability: Given values p_2, ..., p_ℓ ∈ R, compute or approximate sup {p_1 ∈ R | ⟨p_1, p_2, ..., p_ℓ⟩ ∈ Ach(F)}.
Pareto: Compute or approximate the Pareto front Pareto(F).
Algorithm 1: Approximating the set of achievable points

Input: MA M with initial state s_I, objectives F = ⟨f_1, ..., f_ℓ⟩
Output: An approximation of Ach(F)
1  P ← ∅                      // collects achievable points found so far
2  Q ← R^ℓ                    // excludes points that are known to be unachievable
3  repeat
4      choose a weight vector w ∈ R^ℓ_{≥0} and a precision ε ∈ R_{≥0}
5      find v_w ∈ R and σ_w ∈ Σ with v_w ≥ sup {w · Ex_σ(F) | σ ∈ Σ} and v_w − w · Ex_{σ_w}(F) ≤ ε
6      p_w ← Ex_{σ_w}(F)
7      P ← P ∪ {p_w};  Q ← Q ∩ {q ∈ R^ℓ | w · q ≤ v_w}
8  until the approximation given by P and Q is sufficiently precise

Approximation of Achievable Points
A practically efficient approach that tackles the above queries for expected total rewards on MDPs was given in [28]. It is based on so-called sandwich algorithms known from convex multi-objective optimization [53,51]. We extend the algorithm to arbitrary combinations of objectives f_j on MA, including, and this is the main algorithmic novelty, mixtures of total and LRA reward objectives.
The idea is to iteratively refine an approximation of the set of achievable points Ach(F). The refinement loop is outlined in Algorithm 1. At the start of each iteration, the algorithm chooses a weight vector w and a precision parameter ε based on some heuristic (details below). Then, Line 5 considers the weighted sum of the expected values of the objectives f_j. More precisely, an upper bound v_w for sup {w · Ex_σ(F) | σ ∈ Σ} as well as a "near-optimal" strategy σ_w need to be found such that the difference between the bound v_w and the weighted sum induced by σ_w is at most ε. In Sect. 4, we outline the computation of v_w and σ_w for the case where F consists of total and LRA reward objectives. Next, in Line 6 the algorithm computes a point p_w that contains the expected values for each individual objective f_j under strategy σ_w. These values can be computed using off-the-shelf single-objective model checking algorithms on the model induced by σ_w. By definition, p_w is achievable. Finally, Line 7 inserts the found point into the initially empty set P and excludes points from the set Q (which initially contains all points) that are known to be unachievable. The following theorem establishes the correctness of the approach. We prove it using Lemmas 1 and 2.
Theorem 1. In every iteration of Algorithm 1 it holds that dwconv(P) ⊆ cl(Ach(F)) and Ach(F) ⊆ Q.

Lemma 1. The set of achievable points Ach(F) is convex.

Proof. We need to show that for two points p_1, p_2 ∈ Ach(F) with achieving strategies σ_1, σ_2 ∈ Σ, any point p on the line connecting p_1 and p_2 is also achievable. Formally, for w ∈ [0, 1] we show that p_w = w · p_1 + (1 − w) · p_2 ∈ Ach(F). Consider the strategy σ_w that initially makes a coin flip: with probability w it mimics σ_1 and otherwise it mimics σ_2. For each objective f_j we get Ex_{σ_w}(f_j) = w · Ex_{σ_1}(f_j) + (1 − w) · Ex_{σ_2}(f_j), which is at least the j-th entry of p_w. Hence, σ_w achieves p_w. Theorem 1 then follows; a similar proof was given in [28].
Algorithm 1 can be stopped at any time and the current approximation of Ach (F) can be used to (i) decide qualitative achievability, (ii) provide a lower and an upper bound for quantitative achievability, and (iii) obtain an approximative representation of the Pareto front.
The precision parameter ε can be decreased dynamically to obtain a gradually finer approximation. If Ach (F) is closed, the supremum sup {w · Ex σ (F) | σ ∈ Σ } can be attained by some strategy σ w , allowing us to set ε = 0.
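A minimal sketch of this refinement loop in Python follows; it is our illustration, where `weighted_sum_oracle` abstracts the computation of v_w and σ_w described in Sect. 4, and representing Q by half-spaces is a choice of this sketch rather than a prescription of the paper.

```python
import numpy as np

def approximate_achievable(weighted_sum_oracle, weight_vectors, eps):
    """Outline of Algorithm 1.

    weighted_sum_oracle(w, eps) is assumed to return (v_w, p_w) with
    v_w an upper bound on sup_sigma w . Ex_sigma(F) and p_w the point
    Ex_sigma_w(F) of a near-optimal strategy, v_w - w . p_w <= eps.
    """
    P = []           # lower bound: achievable points found so far
    halfspaces = []  # upper bound: achievable q satisfy w . q <= v_w
    for w in weight_vectors:            # chosen by a heuristic, see below
        w = np.asarray(w, dtype=float)
        v_w, p_w = weighted_sum_oracle(w, eps)
        P.append(np.asarray(p_w, dtype=float))
        halfspaces.append((w, v_w))
    return P, halfspaces

def maybe_achievable(q, halfspaces):
    """q can only be achievable if no computed half-space excludes it."""
    return all(np.dot(w, q) <= v for w, v in halfspaces)
```

Qualitative achievability of a point p is then answered positively once p lies in dwconv(P) and negatively once some half-space excludes it; otherwise the loop continues with refined weights.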
We briefly sketch the selection of weight vectors as proposed in [28]. In the first iterations of Algorithm 1, we optimize each objective f_j individually, i.e., we consider for all j the weight vector w with w_i = 0 for i ≠ j and w_j = 1. After that, we consider weight vectors that are orthogonal to a facet of the downward convex hull of the current set of points P. To approximate the Pareto front, facets with a large distance to R^ℓ \ Q are considered first. To answer a qualitative or quantitative achievability query, the selection can be guided further based on the input point p ∈ R^ℓ or the input values p_2, ..., p_ℓ ∈ R. More details and further discussions on these heuristics can be found in [28].
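For ℓ = 2, the facet-based selection has a simple geometric reading: the facets of dwconv(P) form the upper convex hull of P, and the outer normal of each facet is a candidate weight vector. The following sketch works under this two-dimensional assumption; it is our illustration, not the exact heuristic of [28] or its implementations.

```python
import numpy as np

def cross(o, a, b):
    """Orientation of the turn o -> a -> b (positive: counter-clockwise)."""
    return (a[0] - o[0]) * (b[1] - o[1]) - (a[1] - o[1]) * (b[0] - o[0])

def next_weights_2d(points):
    """Outer normals of the facets of dwconv(points) for 2D points,
    i.e., of the upper convex hull of the point set."""
    pts = sorted(set(map(tuple, points)))
    hull = []
    for p in pts:                         # monotone-chain upper hull
        while len(hull) >= 2 and cross(hull[-2], hull[-1], p) >= 0:
            hull.pop()
        hull.append(p)
    weights = []
    for (x1, y1), (x2, y2) in zip(hull, hull[1:]):
        n = np.array([y1 - y2, x2 - x1])  # outer normal of the facet
        if n.sum() > 0:                   # skip degenerate facets
            weights.append(n / n.sum())   # normalize entries to sum 1
    return weights

# The points (0,1) and (1,0) yield the weight vector (0.5, 0.5),
# directing the next query at the unexplored region between them.
print(next_weights_2d([(0.0, 1.0), (1.0, 0.0)]))
```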
Remark 2. Assumption 1 does not exclude Ex_σ(f_j) = −∞, which occurs, e.g., when objectives reflect resource consumption and some (bad) strategies require infinite resources. Moreover, if Assumption 1 is violated for an objective f_j, we observe that for this objective any (arbitrarily high) value p ∈ R can be achieved by some strategy σ ∈ Σ with p ≤ Ex_σ(f_j). Similar to the proof of Lemma 2, a strategy can be constructed that, with a small probability, mimics a strategy inducing a very high expected value for f_j and, with the remaining (high) probability, optimizes for the other objectives. Let F_{−j} be the tuple F without f_j and similarly, for p ∈ R^ℓ, let p_{−j} ∈ R^{ℓ−1} be the point p without the j-th entry. Assuming inf {Ex_σ(f_j) | σ ∈ Σ} > −∞, we can show that cl(Ach(F)) = {p ∈ R^ℓ | p_{−j} ∈ cl(Ach(F_{−j}))}. Put differently, cl(Ach(F)) can be constructed from the achievable points obtained without the objective f_j.

Optimizing Weighted Combinations of Objectives
We now analyze weighted sums of expected values as in Line 5 of Algorithm 1.

Weighted Sum Optimization Problem
Input: MA M with initial state s_I, objectives F = ⟨f_1, ..., f_ℓ⟩, weight vector w ∈ R^ℓ_{≥0}, precision ε ∈ R_{≥0}
Output: Value v_w ∈ R with v_w ≥ sup {w · Ex_σ(F) | σ ∈ Σ} and strategy σ_w ∈ Σ with v_w − w · Ex_{σ_w}(F) ≤ ε

We only consider total and LRA reward objectives; Remark 3 discusses other objectives. We show that instead of a weighted sum of the expected values we can consider weighted sums of the rewards. This allows us to combine all objectives into a single reward assignment and then apply single-objective model checking.

Pure Long-run Average Queries
Initially, we restrict ourselves to LRA objectives and show a reduction of the weighted sum optimization problem to a single-objective long-run average reward computation. As usual for MA [38,29,17], we forbid so-called Zeno behavior.
Assumption 2 (Non-Zenoness). For all σ ∈ Σ and s ∈ S, the probability of Zeno paths, i.e., paths taking infinitely many transitions within a finite amount of time, is 0 under Pr^{M,s}_σ.

The assumption is equivalent to assuming that every end component (EC) of M contains at least one Markovian state. If the assumption holds, the limit in Definition 6 can be attained almost surely (with probability 1) and corresponds to a value v ∈ R. Thus, Assumption 1 for LRA objectives is already implied by Assumption 2. Let F_lra = ⟨lra(R_1), ..., lra(R_ℓ)⟩ with reward assignments R_j. Moreover, for weight vector w let R_w be the reward assignment with R_w(⟨s, κ⟩, s′) = Σ_{j=1}^{ℓ} w_j · R_j(⟨s, κ⟩, s′).

Theorem 2. ∀ σ ∈ Σ: w · Ex_σ(F_lra) = Ex_σ(lra(R_w)).
Due to Theorem 2, it suffices to consider the expected LRA reward for the single reward assignment R_w. The supremum sup {Ex_σ(lra(R_w)) | σ ∈ Σ} is attained by some memoryless deterministic strategy σ_w ∈ Σ_md [30]. Such a strategy and the induced value v_w = Ex_{σ_w}(lra(R_w)) can be computed (or approximated) with linear programming [30], strategy iteration [42], or value iteration [17,1].
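As a concrete instance of the iterative methods just cited, the following sketches relative value iteration for maximal expected LRA rewards on an MDP with step-based rewards. This is our simplified illustration under a unichain and aperiodicity assumption [46], not the exact algorithm of [17,42,1].

```python
import numpy as np

def lra_relative_value_iteration(P, r, eps=1e-6):
    """Maximal expected LRA reward of a finite MDP.

    P[s][a] is the successor distribution of action a in state s as a
    probability vector over all states; r[s][a] its immediate reward.
    Assumes a unichain, aperiodic MDP (apply an aperiodicity
    transformation first if necessary).
    """
    num_states = len(P)
    v = np.zeros(num_states)
    while True:
        w = np.array([max(r[s][a] + np.dot(P[s][a], v)
                          for a in range(len(P[s])))
                      for s in range(num_states)])
        delta = w - v
        if delta.max() - delta.min() < eps:          # span has converged
            gain = (delta.max() + delta.min()) / 2   # the LRA reward
            policy = [max(range(len(P[s])),
                          key=lambda a: r[s][a] + np.dot(P[s][a], v))
                      for s in range(num_states)]
            return gain, policy
        v = w - w[0]     # renormalize: relative value iteration
```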

A Two-phase Approach for Single-objective LRA
The computation of single-objective expected LRA rewards for reward assignment R_w can be divided into two phases [29,17,1]: first, maximal expected LRA rewards are computed for the maximal end components of M; second, these values are combined by solving an expected total reward problem on a quotient model in which each such end component is collapsed into a single state.
Definition 8 (Quotient). For a set C of ECs of M, the quotient MA M^{\C} collapses the states of each EC C ∈ C into a single state C. At such a state, either a fresh action ⊥ can be chosen, leading to a dedicated absorbing state s_⊥, or an exiting state-action pair can be chosen: the successor distribution of a state c is given by P(⟨s, α⟩) (lifted to the collapsed state space) if c = ⟨C, ⟨s, α⟩⟩ for C ∈ C and ⟨s, α⟩ ∈ exits(C).

Intuitively, selecting action ⊥ at a state C ∈ MECS(M) in M^{\MECS(M)} reflects any strategy of M that upon visiting the EC C will stay in this EC forever. We can thus mimic any strategy of the sub-MA M[C], in particular a memoryless deterministic strategy that maximizes the expected value of lra(R_w) in M[C]; let v_C denote this maximal value. Contrarily, selecting an action ⟨s, α⟩ at a state C of M^{\MECS(M)} reflects a strategy of M that upon visiting the EC C enforces that the states of C will be left via the exiting state-action pair ⟨s, α⟩. Let R* be the reward assignment for M^{\MECS(M)} that yields R*(⟨C, ⊥⟩, s_⊥) = v_C and 0 in all other cases. It can be shown that max {Ex^{M,s_I}_σ(lra(R_w)) | σ ∈ Σ^M} = max {Ex^{M′,s_I}_σ(tot(R*)) | σ ∈ Σ^{M′}} for M′ = M^{\MECS(M)}. The maximal total reward in M′ can be computed using standard techniques such as value iteration and policy iteration [46] as well as the more recent sound value iteration and optimistic value iteration [48,36]. The latter two provide sound precision guarantees for the output value v, i.e., |v − max {Ex^{M′,s_I}_σ(tot(R*)) | σ ∈ Σ^{M′}}| ≤ ε for a given ε > 0.
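The second phase thus amounts to a maximal expected total reward computation, for which plain value iteration looks as follows. This sketch is our illustration; unlike sound and optimistic value iteration [48,36], its naive stopping criterion gives no formal precision guarantee.

```python
import numpy as np

def total_reward_value_iteration(P, r, eps=1e-8):
    """Maximal expected total reward via value iteration.

    P[s][a] and r[s][a] as before; collapsed EC states reach the
    absorbing sink (value 0) via the bottom action, whose reward is
    the precomputed LRA value v_C. Assumes all maximal total rewards
    are finite.
    """
    num_states = len(P)
    v = np.zeros(num_states)
    while True:
        w = np.array([max(r[s][a] + np.dot(P[s][a], v)
                          for a in range(len(P[s])))
                      for s in range(num_states)])
        if np.max(np.abs(w - v)) < eps:   # naive stopping criterion
            return w
        v = w
```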
The above-mentioned procedure for LRA reduces the analysis to an expected total reward computation on the quotient model M^{\MECS(M)}. This suggests also incorporating the other total reward objectives of M in the quotient model. However, special care has to be taken concerning total rewards collected within ECs of M that are no longer present in the quotient M^{\MECS(M)}. We deal with this issue by considering the quotient only for ECs in which no (total) reward is collected. We start by restricting the (total) rewards that may be assigned to transitions within ECs.

Assumption 3 (Sign-Consistency). For all total reward objectives tot(R_j) and every EC C of M, the rewards that R_j assigns within C are either all non-negative or all non-positive.
The assumption implies that paths on which infinitely much positive and infinitely much negative reward is collected have probability 0. One consequence is that the limit in Definition 5 exists for almost all paths [3]. A discussion of objectives tot(R_j) that violate Assumption 3 in the single-objective MDP setting is given in [3]. Their multi-objective treatment is left for future work.
When Assumptions 1 and 3 hold, we get R_j(C) ≤ 0 for all objectives tot(R_j) and every EC C. Put differently, all non-zero total rewards collected in an EC have to be negative. Strategies that induce a total reward of −∞ for some objective tot(R_j) will not be taken into account for the set of achievable points. Therefore, transitions within ECs that yield negative reward should only be taken finitely often. These transitions can be disregarded when computing the expected LRA rewards, i.e., only the 0-ECs [3] are relevant for the LRA computation. We are ready to describe our approach that combines the LRA rewards of 0-ECs and the remaining total rewards into a single total reward objective. Let R^tot_w and R^lra_w be reward assignments with R^tot_w(⟨s, κ⟩, s′) = Σ_{i=1}^{k} w_i · R_i(⟨s, κ⟩, s′) and R^lra_w(⟨s, κ⟩, s′) = Σ_{j=k+1}^{ℓ} w_j · R_j(⟨s, κ⟩, s′). Moreover, for π ∈ Paths_inf we set (tot(R^tot_w) + lra(R^lra_w))(π) = tot(R^tot_w)(π) + lra(R^lra_w)(π).

Theorem 3. ∀ σ ∈ Σ: w · Ex_σ(F) = Ex_σ(tot(R^tot_w) + lra(R^lra_w)).
Proof. The claim follows by a similar reasoning as in the proof of Theorem 2, splitting the weighted sum into its total reward and its LRA reward part.

Algorithm 2: Optimizing the weighted sum for total and LRA objectives

Input: MA M with initial state s_I, objectives F = ⟨tot(R_1), ..., tot(R_k), lra(R_{k+1}), ..., lra(R_ℓ)⟩, weight vector w
Output: Value v_w and strategy σ_w as in the weighted sum optimization problem
1  C ← MECS_0(M, R_1, ..., R_k)    // compute maximal 0-ECs
2  for each C ∈ C do               // ... and their LRA
3      compute v_C = max {Ex^{M[C],s}_σ(lra(R^lra_w)) | σ ∈ Σ^{M[C]}} and an inducing strategy σ_C ∈ Σ^{M[C]}_md
4  M* ← M^{\C}                     // build and analyze quotient model
5  build reward assignment R* with R*(⟨C, ⊥⟩, s_⊥) = v_C and R*(c, s′) = R^tot_w(c, s′) otherwise
6  compute v_w and an inducing strategy σ* for tot(R*) on M*, restricted to strategies that almost surely reach s_⊥
7  combine σ* and the strategies σ_C, σ_{C♦s} into σ_w

Algorithm 2 outlines the procedure for solving the weighted sum optimization problem. It first computes optimal LRA rewards and inducing strategies for each maximal 0-EC (Lines 1 to 3). Then, a quotient model M* and a reward assignment R* incorporating all total and LRA rewards is built and analyzed (Lines 4 to 6). M* might still contain ECs other than {s_⊥}. Those ECs shall be left eventually to avoid collecting infinite negative reward for a total reward objective tot(R_i). Note that the weight w_i for such an objective might be zero, i.e., the rewards of R_i are not present in R*. It is therefore necessary to explicitly restrict the analysis to strategies that almost surely (i.e., with probability 1) reach s_⊥. To compute the maximal expected total reward in Line 6 with, e.g., standard value iteration, we can consider another quotient model for M* and the 0-ECs of M* and R*. In contrast to Definition 8, this quotient should not introduce the ⊥ action since it shall not be possible to remain in an EC forever. In Line 7, the strategies for the 0-ECs and for the quotient M* are combined into one strategy σ_w for M. Here, σ_{C♦s} refers to a strategy of M[C] under which every state s′ ∈ states(C) eventually reaches s ∈ states(C) almost surely. Since Algorithm 2 produces a memoryless deterministic strategy σ_w, the point p_w ∈ R^ℓ in Line 6 of Algorithm 1 can be computed on the sub-MA induced by σ_w. Assuming exact single-objective solution methods, the resulting value v_w and strategy σ_w ∈ Σ^M_md of Algorithm 2 satisfy v_w = w · Ex_{σ_w}(F), yielding an exact solution to the weighted sum optimization problem. As the number of memoryless deterministic strategies is bounded, we conclude the following, extending results for pure LRA queries [11] to mixtures with total rewards.

Remark 3. Our framework can be extended to support objectives beyond total and LRA rewards. Minimizing objectives, where one is interested in a strategy σ that induces a small expected value, can be considered by multiplying all rewards with −1. Since we already allow negative values in reward assignments, no further adaptations are necessary. We emphasize that our framework lifts a restriction imposed in [28] that disabled a simultaneous analysis of maximizing and minimizing total reward objectives. Reachability probabilities can be transformed into expected total rewards on a modified model in which the information whether a goal state has already been visited is stored in the state space. Goal-bounded total rewards as in [30], where no further rewards are collected as soon as one of the goal states is reached, can be transformed similarly. For MDPs, step- and reward-bounded reachability probabilities can be converted into total reward objectives by unfolding the current amount of steps (or rewards) into the state space of the model. Approaches that avoid such an expensive unfolding have been presented in [28] for objectives with step bounds and in [34,35] for objectives with one or multiple reward bounds. Time-bounded reachability probabilities for MA have been considered in [47]. Finally, ω-regular specifications such as linear temporal logic (LTL) formulae have been transformed into total reward objectives in [27]. However, the optimization of LRA rewards within the ECs of the model might interfere with the satisfaction of one or more ω-regular specifications [31].
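To illustrate the transformation of reachability probabilities mentioned in Remark 3: for plain unbounded reachability, an equivalent and simpler standard construction to the visited-bit unfolding makes goal states absorbing and rewards entering them once. The following sketch uses a hypothetical encoding of ours, not the paper's construction.

```python
def reachability_as_total_reward(P, goal):
    """Encode Pr(eventually reach goal) as an expected total reward.

    P[s][a] is a dict {successor: probability}. Entering a goal state
    yields transition reward 1; goal states are made absorbing so the
    reward is collected at most once. The maximal expected total
    reward of the result equals the maximal reachability probability.
    """
    P2, r2 = [], []
    for s in range(len(P)):
        if s in goal:                       # absorbing, no further reward
            P2.append([{s: 1.0}])
            r2.append([{s: 0.0}])
        else:
            P2.append(P[s])
            r2.append([{t: (1.0 if t in goal else 0.0) for t in dist}
                       for dist in P[s]])
    return P2, r2
```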

Experimental Evaluation
Implementation details. Our approach has been implemented in the model checker Storm [40]. Given an MA or MDP (specified using the PRISM language or JANI [14]), the tool answers qualitative and quantitative achievability as well as Pareto queries. Besides mixtures of total and LRA reward objectives, Storm also supports most of the extensions in Remark 3, with the notable exception of LTL. We use LRA value iteration [17,1] and sound value iteration [48] for calls to single-objective model checking. Both provide sound precision guarantees, i.e., the relative error of these computations is at most ε, where we set ε = 10^{−6}.
Workstation cluster. To showcase the capabilities of our implementation, we present a workstation cluster, originally considered as a CTMC in [39], now modeled as an MA. The cluster consists of two sub-clusters, each comprising one switch and N workstations. Within each sub-cluster the workstations are connected to the switch in a star topology, and the two switches are connected via a backbone. Each of the components may fail with a certain rate. A controller can (i) acquire additional repair units (up to M) and (ii) control the movements of the repair units. In Fig. 2a we depict the resulting sets of achievable points, as computed by our implementation, for N = 16 and M = 4. As objectives, we considered the long-run average number of operating workstations lra(R_#op), the long-run average probability that at least N workstations are operational lra(R_#op≥N), and the total number of acquired repair units tot(R_#rep).

Related tools. MultiGain [12] is an extension of PRISM [45] that implements the LP-based approach of [11] for multiple LRA objectives on MDPs to answer qualitative and quantitative achievability as well as Pareto queries. For the latter, it is briefly mentioned in [12] that ideas of [28] are used similarly to our approach, but no further details are provided. MultiGain does not support MA, mixtures with total reward objectives, or Pareto queries with more than two objectives. However, it does support more general quantitative achievability queries. PRISM-games [44,43] implements value iteration over convex sets [8,9] to analyze multiple LRA reward objectives on stochastic games (SGs). By converting MDPs into 1-player SGs, PRISM-games could also be applied in our setting. However, some experiments on 1-player SGs indicated that this approach is not competitive with the dedicated MDP implementations in MultiGain and Storm. We therefore do not consider PRISM-games in our evaluation.
Benchmarks. We consider 10 different case studies, including the workstation cluster (clu) as well as benchmarks from QVBS [37] (dpm, rqs, res), from MultiGain [12] (mut, phi, vir), from [42] (csn, sen), and from [47] (pol). For each case study we consider 3 concrete instances, resulting in 12 MAs and 18 MDPs. The analyzed objectives range over LRA rewards, (goal-bounded) total rewards, and time-, step-, and unbounded reachability probabilities.

Discussion. As indicated in Fig. 2b, our implementation outperforms MultiGain on almost all benchmarks and for all types of queries, and is often orders of magnitude faster. According to MultiGain's log files, the majority of its runtime is spent on solving LPs, suggesting that the better performance of Storm is likely due to the iterative approach presented in this work. Table 1 shows that pure LRA queries on models with millions of states can be handled. There were no significant runtime gaps between MA and MDP models. For csn, the increased number of objectives drastically increases the overall runtime. This is partly due to our naive implementation of the geometric set representations used in Algorithm 1. Table 2 indicates that the performance and scalability for mixtures of LRA and other types of objectives are similar. One exception are queries involving time-bounded reachability on MA (e.g., dpm). Here, our implementation is based on the single-objective approach of [29], which is known to be slower than more recent methods [16,15].

Data availability. The implementation, models, and log files are available at [49].

Conclusion
The analysis of multi-objective model checking queries involving multiple long-run average rewards can be incorporated into the framework of [28], enabling (i) the use of off-the-shelf single-objective algorithms for LRA and (ii) the combination with other kinds of objectives such as total rewards. Our experiments indicate that this approach clearly outperforms existing algorithms based on linear programming. Future work includes lifting the approach to partially observable MDPs and stochastic games, potentially using ideas of [10] and [2], respectively.