Abstract
Implementing realistic activity patterns for a population is crucial for modeling, for example, disease spread, supply and demand, and disaster response. Using the dynamic activity simulation engine, DASim, we generate schedules for a population that capture regular activities (e.g., working, eating, and sleeping) and irregular activities (e.g., shopping or going to the doctor). We use the sample entropy (SampEn) statistic to quantify a schedule’s regularity for a population. We show how to tune an activity’s regularity by adjusting SampEn, thereby making it possible to realistically design activities when creating a schedule. The tuning process sets up a computationally intractable, high-dimensional optimization problem. To reduce the computational demand, we use Bayesian Gaussian process regression to compute global sensitivity indices and identify the parameters that have the greatest effect on the variance of SampEn. We use the harmony search (HS) global optimization algorithm to locate global optima. Our results show that HS combined with global sensitivity analysis can efficiently tune the SampEn statistic with few search iterations. We demonstrate how global sensitivity analysis can guide statistical emulation and global optimization algorithms to efficiently tune activities and generate realistic activity patterns. Though our tuning methods are applied to dynamic activity schedule generation, they are general and represent a significant step toward the automated tuning and optimization of high-dimensional computer simulations.
Introduction
High-dimensional computer models for simulating real-world phenomena have many variables and present a difficult challenge in understanding the relationship between input and output. Known as the curse of dimensionality, a full-space analysis of the nature of input-output relationships is NP-complete, scaling exponentially as s^{n}, where s is the number of sample values for each of the n input variables (Rabitz and Aliş 1999). This paper presents an efficient method for determining these input-output relationships in high-dimensional models using a combination of global optimization and global sensitivity analysis. We demonstrate our method using a model of human activity and movement.
Human activity and movement patterns are complex and notoriously difficult to model (Berry et al. 2002). Large variations in movement patterns stem from demographic, geographic, and temporal differences. Quantifying the effects of these differences on human activity/schedules provides a difficult but important challenge (González et al. 2008). Realistic human activity and movement models are fundamental components for agent-based infrastructure simulations. These models use human activity patterns to simulate complex systems including epidemics (Eubank et al. 2004; Colizza et al. 2007; Mniszewski et al. 2008; Stroud et al. 2007), traffic (Kitamura et al. 2000), and natural disaster response (Pan et al. 2007). Despite their importance, models typically simplify the complexity of human movement and rely on estimates such as static activity patterns. The static approach results in a Groundhog Day-like effect, where every person performs the same activities day in and day out according to a fixed schedule. Since the schedule cannot be modified based on exogenous events, the schedule will inevitably repeat over some finite time scale.
The level of realism required in a model of a natural phenomenon depends upon the scenario being modeled and the questions being addressed (Burton 2003; Burton and Obel 1995). In epidemic modeling, capturing emergent human behavior is crucial for accurately forecasting the spread of disease and the impact of mitigation strategies. Similarly, for modeling disaster response during a natural or man-made event, understanding people’s activities before and after the event will help emergency responders allocate resources. Finally, supply and demand modeling of various utilities (e.g., water, electricity, and communications) depends on the population’s activities as they move throughout the day. Therefore, capturing realistic activity patterns can help improve modeling efforts and save lives during emergencies.
We have built on the previous body of activity pattern research. Germann et al. presented a study analyzing mitigations for a pandemic influenza in the United States (Germann et al. 2006). In their study, 12-hour schedules were cycled to direct the activities of seven different mixing groups consisting of work, school, day care, play group, neighborhood, neighborhood clusters, and communities. Paleshi et al. performed a similar study that featured stricter mixing patterns according to four coarsely defined demographic groups: preschool children (ages <1–4), school children (5–18), adults (19–64), and seniors (≥65) (Paleshi et al. 2011). In Stroud et al. (2007) and Mniszewski et al. (2008), epidemic simulations rely on static schedules with individuals cycling through nine different activities. Additionally, in contrast to other studies, individuals temporarily deviate from their schedules when ill, and parents stay home with sick children. Weekday schedules were further distinguished from weekends and holidays by replacement of work or school with home for a portion of the population in a study of social contact patterns and their effect on the spread of disease (Del Valle et al. 2008); it was found that neglecting the difference between weekday and weekend activities can greatly overestimate the impact of disease spread. Brockmann et al. moved beyond the realm of static schedules by considering random walks as a proxy for human movement based on trajectories of almost 500,000 dollar bills (Brockmann et al. 2006). González et al. studied the paths of 100,000 mobile phone users and showed that humans do not behave randomly; rather, they follow simple reproducible spatial patterns (González et al. 2008). All of these models neglect basic human characteristics based on desire, need, and importance that can impact and change schedules accordingly (e.g., getting sick may force a person to go home early from work, or car maintenance may preclude shopping).
A realistic human activity and movement model needs to dynamically take these basic human traits into account (Macy and Willer 2002).
We use the Dynamic Activity Simulator (DASim), previously known as ActivitySim (Galli et al. 2009), which incorporates activity utility and priority to develop schedules for a population of individuals. DASim generates schedules that give each individual close to the maximal utility consistent with their activity priorities. This allows one to design population schedules by specifying priorities and utilities of a variety of activities for any number of demographic groups. Moreover, new schedules can be generated dynamically during a simulation.
Once these schedules are dynamically generated, it is not immediately apparent whether they are realistic for a population. In actual populations, we expect demand hours (i.e., the total number of people participating in an activity aggregated over one hour) for certain activities, such as grocery shopping or working, to be stable on any given weekday. For recreation activities or hospital visits, we expect daily demand hours to fluctuate, with possibly a more stable amount of demand hours on a monthly or quarterly timescale. In this way, regularity of demand hours can be imposed on a population’s schedules to reflect the traits of certain activities, thus adding realism to the dynamic schedule generation. We propose to quantify an activity schedule’s regularity using the sample entropy (SampEn) statistic (Richman and Moorman 2000). That is, the SampEn of the time series associated with DASim output is used to dynamically adjust schedules to be consistent with regular and irregular activity patterns. By tuning SampEn, one can design schedules comprised of activities that occur with a desired level of regularity.
Tuning the SampEn statistic for a schedule can be posed as a high-dimensional optimization problem. Global sensitivity analysis can be used to reduce the dimensionality of the optimization problem by targeting the input parameters in DASim that control the majority of the variation in SampEn. The sensitivity analysis is carried out efficiently through the use of Bayesian Gaussian process regression. Once a low-dimensional set of influential parameters is discovered, a global optimization scheme, harmony search (HS) (Geem et al. 2001), is used to tune SampEn and therefore adjust the regularity of activities in a schedule. We demonstrate that reducing the search space for HS to only the influential parameters results in a more efficient search.
Methods
Dynamic activities model
DASim is a dynamic, parallel, agent-based discrete event movement and activity simulator. DASim requires two components to generate schedules: (1) a population with demographic characteristics, and (2) locations with geographic coordinates. DASim can use any population and location data, but the synthetic population we use is based on U.S. census data^{Footnote 1} and includes various demographic characteristics such as age, gender, income, and status (e.g., worker, student, and stay-at-home). In addition, each person has a household consistent with the census data. Locations are derived from the Dun & Bradstreet business directory database,^{Footnote 2} which includes addresses and business types. Businesses can be aggregated in a geographic area and may include multiple business types, such as a shopping mall. DASim integrates all this information to generate realistic schedules according to each person’s preferences and needs.
Activities are defined based on the scenarios of interest. For example, they can be general (e.g., home, work, school, shop, social recreation), more specific (e.g., sleep, personal care, breakfast, lunch, food shopping, morning work, afternoon work), or mixed. Subsets of activities are stratified based on different demographic characteristics such as age, school and worker status, and/or gender. Some examples include children (0–5 years old), youth (6–18 years old), workers (19–64 years old), and seniors (65+ years old). In DASim, each demographic group is assigned an activity set comprised of various allowed activities as demonstrated in Table 1. Each activity in each set has associated constraints, a utility function, and a priority function. These controls provide the ability to finely tune activities for each population group.
DASim’s utility and priority functions govern activity benefit and importance, respectively. In practice, utility functions influence activity duration, and priority functions influence the order in which activities occur. Utility increases up to a limiting or maximum useful duration. Priority indicates how often an activity is scheduled given the longest possible time between activity executions. So, the utility is a function of activity duration, d≥0, while priority is a function of activity start time, t≥0. Utility (U) and priority (P) functions are represented in DASim by the sigmoid function presented in (Joh et al. 2001),
where α_{u,p}, β_{u,p}, and γ_{u,p} are activity-specific parameters that determine the function’s offset, slope, and inflection point, respectively. Table 2 describes in more detail how these parameters affect utility and priority. For a more detailed analysis of these parameters and what they mean in practice, the reader is referred to Joh et al. (2001). U and P vary over the interval [0,1]. To change a dynamically generated schedule in DASim, we vary the six parameters (α_u, β_u, γ_u, α_p, β_p, γ_p) for each activity and for each demographic group. Figure 1 demonstrates sample utility and priority functions for several activities using different parameter sets.
A schedule is defined as a set of activities, where each activity has a specified minimum and maximum duration, start and end window, utility and priority functions, maximum travel time, and probability the activity will be performed on a weekend. Activities are scheduled in windows of time (e.g., a 24-hour window means that each person schedules activities 24 hours in advance). A schedule s^{∗} is generated by maximizing an objective function that balances the utility of an activity against the priority of all activities and the time it takes to travel to each activity in order to rank schedules,
Here, s is a schedule in the set of all possible schedules S, N is the number of activities in schedule s, \(U_{a_{i}}(d_{i})\) is the utility of activity a _{ i } of duration d _{ i }, C is the priority multiplier, B is the number of all possible activities from which the agent can choose, \(P_{a_{i}}(t_{r})\) is the priority of activity a _{ i } at time t _{ r }, D is the travel time multiplier, \(TT_{a_{\mathrm{max}}}\) is the maximum travel time for activity a, and \(T_{a_{i}}\) is the travel time for activity a _{ i }.
The two parameters in the objective function, C and D, weigh the importance of the priority function and travel time constraints, respectively. C and D are global parameters and apply equally to all activities for all demographic groups. The three parameters in each of the utility and priority functions are local parameters set on a per-activity basis.
Schedules are designed using the local search metaheuristic (Lourenço et al. 2003), similar to the method used in Joh et al. (2001). The local search algorithm iteratively adds new activities to a schedule or randomly selects an operator to apply to the schedule from the operators presented in Table 3. Activities are selected randomly from a set specific to each demographic group, with probability weighted by priority. Activity duration is chosen using the specified time constraints. Travel distance is calculated as the Euclidean distance from the location of the previous activity, not as road or route distance. To calculate travel time, we divide the travel distance by the average speed (fixed at 16 m/s). The objective function is used to evaluate proposed schedule changes. The schedule for a time window is complete when it is full (i.e., when there is no unaccounted-for time in the individual’s scheduling window) and a fixed number of optimization iterations have been completed. In our experiments, we use 10 iterations of the local search algorithm during the optimization step. A larger number of optimization steps allows local search to design slightly better schedules, but at the cost of increased compute time. Figure 2 presents a diagram describing the local search process.
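The accept-if-better loop just described can be sketched generically in Python. This is a simplification with placeholder operators and objective; DASim's actual schedule operators are those listed in Table 3, and its objective is the one defined above.

```python
import random

def local_search(schedule, operators, objective, iterations=10, seed=None):
    """Generic local-search step: apply randomly chosen operators to the
    current schedule and keep a proposal only if it ranks higher."""
    rng = random.Random(seed)
    best, best_score = schedule, objective(schedule)
    for _ in range(iterations):
        candidate = rng.choice(operators)(best)
        score = objective(candidate)
        if score > best_score:  # keep the proposal only if it improves
            best, best_score = candidate, score
    return best
```

With a toy state (an integer), increment/decrement operators, and the objective −|x − 5|, the search walks the state toward 5 and then rejects all further moves.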
In this study, we concentrate on a randomly generated 10-person test population. Each of the 10 people in the test population is allowed to create schedules from an activity set comprised of two activities. The first activity is allowed to be between 1 and 24 hours long (allowing for a variety of short- or long-duration activities, such as personal care, shopping, and medical appointments). The second activity is set to be between 4 and 10 hours (forcing longer-duration activities, such as work, home, and sleep). The weekend factor for both activities is 1.0 (indicating that the activities are equally likely to occur during the weekend as during the week). The maximum travel time for each activity is fixed at 2 hours. Activities are allowed to start and end at any point during the day.
Sample entropy
Certain human activities occur with a high degree of regularity (e.g., working, going home), while others occur more erratically (e.g., medical treatment, social recreation) (Bhat et al. 2004; Kitamura and Hoorn 1987; Kitamura et al. 2006; Schlich and Axhausen 2003). Here, we develop a procedure to choose DASim parameters (α _{{u,p}},β _{{u,p}},γ _{{u,p}},C,D) that ensure spontaneity or regularity in an activity. We use the sample entropy (SampEn) statistic to detect regularity in a time series associated with a schedule.
SampEn was first introduced by Richman and Moorman (Richman et al. 2004; Richman and Moorman 2000) in response to Pincus’ seminal work on approximate entropy (ApEn) (Pincus 1991). Entropy quantifies the amount of order or disorder in a system. Ordered systems yield low entropy while disordered or chaotic systems yield high entropy. For a time series, this usually means that a low entropy system will have repeated changes or will remain constant, while a high entropy time series will have unpredictable changes that are highly variable. ApEn was originally developed to analyze regularity in medical and biological time series, specifically neonatal heart rates. It is still commonly used in medical literature (Goldberger et al. 2002; Hornero et al. 2005, 2006; Pincus and Goldberger 1994; Varela et al. 2003) and has also been applied to a variety of other fields including finance (Pincus and Kalman 2004) and human factors engineering (McKinley et al. 2011). SampEn improves on ApEn in several ways; most notably, it is a less biased statistic and requires about half the computing time (Richman and Moorman 2000).
SampEn computes the conditional probability that if a finite time series repeats itself within a tolerance r for m points, then it will also repeat itself for m+1 points, without allowing selfmatches (Lake et al. 2002). Small values of SampEn (values close to zero) indicate signal regularity (i.e., an ordered system), while relatively larger values indicate less regularity (i.e., a more disordered system). SampEn is still a comparative measure; there is no single threshold above which we may say that any arbitrary signal is irregular. It must be judged relative to the problem being addressed.
In our simulations, SampEn is used to quantify regularity of demand hours for activities on an hourly basis (i.e., m=1). It is common practice to set r equal to some fraction of the standard deviation (σ) of the data being analyzed, allowing measurements on datasets with different amplitudes (Richman and Moorman 2000); thus, we set r=0.2σ, where σ is computed from DASim’s demand hours output. We use the SampEn implementation written in C provided by PhysioNet.^{Footnote 3}
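For concreteness, the computation described above can be transcribed directly into Python. This is an illustrative O(n²) sketch of Richman and Moorman's definition with the paper's settings (m=1, r=0.2σ by default); the production runs use PhysioNet's C implementation.

```python
import math

def sample_entropy(x, m=1, r=None):
    """SampEn(m, r) of a 1-D series, excluding self-matches.

    If r is None, it defaults to 0.2 * (standard deviation of x),
    as used in the paper."""
    n = len(x)
    if r is None:
        mu = sum(x) / n
        r = 0.2 * math.sqrt(sum((v - mu) ** 2 for v in x) / n)
    nt = n - m  # number of templates compared at both lengths m and m + 1

    def matches(length):
        # Pairs (i, j), i < j, whose windows of `length` points stay
        # within tolerance r in the Chebyshev (max-difference) sense.
        return sum(
            1
            for i in range(nt)
            for j in range(i + 1, nt)
            if all(abs(x[i + k] - x[j + k]) <= r for k in range(length))
        )

    b = matches(m)       # count of length-m template matches
    a = matches(m + 1)   # count of length-(m + 1) template matches
    if a == 0 or b == 0:
        return float("inf")  # SampEn undefined; often reported as infinite
    return -math.log(a / b)
```

A perfectly periodic series (e.g., strict day/night alternation) yields SampEn = 0, while a series with no repeated templates yields an infinite value; DASim demand hour series fall in between.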
Global sensitivity analysis
We perform a global sensitivity analysis on the SampEn values computed from 12week DASim simulations with respect to the input parameters for the priority and utility functions. DASim outputs demand hours on an hourly basis for each activity, which represent the total number of people participating in an activity aggregated over one hour. For our 12week simulation period, DASim outputs 2,016 demand hour data points. Figure 3 shows a oneweek sample of DASim output (168 demand hour data points). Note how regularity is evident for home and work activities on a 24hour cycle.
We label two activities, \(\mathcal{A}_{1}\) and \(\mathcal{A}_{2}\), for our 10-person population. For each activity, we define utility and priority functions as in (1) using parameter sets \((\alpha_{u_{1}}, \beta_{u_{1}}, \gamma_{u_{1}}, \alpha_{p_{1}}, \beta_{p_{1}}, \gamma_{p_{1}})\) for \(\mathcal{A}_{1}\) and \((\alpha_{u_{2}}, \beta_{u_{2}}, \gamma_{u_{2}}, \alpha_{p_{2}}, \beta_{p_{2}}, \gamma_{p_{2}})\) for \(\mathcal{A}_{2}\) along with global optimization parameters C and D. A SampEn value is computed for each activity from the DASim demand hours output. For brevity, we denote the set of inputs to a given schedule by

\[\theta = (\alpha_{u_{1}}, \beta_{u_{1}}, \gamma_{u_{1}}, \alpha_{p_{1}}, \beta_{p_{1}}, \gamma_{p_{1}}, \alpha_{u_{2}}, \beta_{u_{2}}, \gamma_{u_{2}}, \alpha_{p_{2}}, \beta_{p_{2}}, \gamma_{p_{2}}, C, D) \in \mathbb{R}^{14}.\]
The main notation used throughout the sensitivity analysis is as follows: we will refer to each of the variables in a given \(\theta\in\mathbb{R}^{14}\) using subscripts, θ _{ j } for j=1,2,3,…,14. Note that we will also be taking multiple samples of θ parameter sets to construct a statistical model of the SampEn. From M samples of θ parameter sets, we form the M×14 sample matrix Θ whose rows are the samples of the θ parameter sets. We will then use the notation Θ _{ i,j } to refer to the j ^{th} parameter in the i ^{th} sample with i=1,2,…,M and j=1,2,…,14. A single subscript will refer to a row of Θ, so Θ _{ i } is the i ^{th} sample parameter set, i=1,2,…,M.
The dynamic scheduling and SampEn computation define the function

\[\mathbf{SampEn}(\theta) = \bigl(\mathrm{SampEn}_{1}(\theta), \mathrm{SampEn}_{2}(\theta)\bigr),\]
with entries corresponding to each activity. We calculate Sobol-Saltelli sensitivity indices (Oakley and O’Hagan 2004; Saltelli 2008) for SampEn_{1}(Θ) and SampEn_{2}(Θ); here, we explain this process for just one of these. For brevity in the following formulas, we drop the activity index and write Se = SampEn_{n}(Θ) for n=1 or 2. First, we specify an allowable range for each of the parameters, \(\theta_{j} \in [\theta^{-}_{j}, \theta^{+}_{j}]\), and consider θ_{j} as a uniformly distributed random variable on \([\theta^{-}_{j}, \theta^{+}_{j}]\). This makes the SampEn for each activity a random variable whose variance is determined by the range of each θ_{j} and by its dependence on each of these variables.
We compute first-order Sobol-Saltelli sensitivity indices, defined as

\[S_{j} = \frac{V(\mathbb{E}(\mathbf{Se} \mid \theta_{j}))}{V(\mathbf{Se})},\]
where V(Se) denotes the variance and \(\mathbb{E}(\mathbf{Se} \mid \theta_{j})\) denotes the expectation of the conditional random variable \(\mathbf{Se} \mid \theta_{j}\). In the variance of the conditional expectation, \(V(\mathbb{E}(\mathbf{Se} \mid \theta_{j}))\), the expectation integral is taken over all variables except θ_{j}, with the j^{th} variable fixed, and the variance is an integral over just θ_{j}. These sensitivity indices represent the fraction of the variance in Se that is attributed to variation in θ_{j}. An equivalent interpretation of S_{j} is the expected fraction by which the variance in Se would be reduced if the value of θ_{j} were fixed.
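The nested expectation and variance in this definition can be checked on a toy function with a brute-force, two-loop Monte Carlo estimate. This is illustrative only (the toy function and sample sizes are arbitrary) and is exactly the kind of expensive computation that motivates the emulator introduced later.

```python
import random

def first_order_index(f, dim, j, n_outer=1000, n_inner=100, seed=0):
    """Brute-force estimate of S_j = V(E(f | x_j)) / V(f) for a function f
    on the unit hypercube [0, 1]^dim with independent uniform inputs."""
    rng = random.Random(seed)

    def evaluate(fixed=None):
        x = [rng.random() for _ in range(dim)]
        if fixed is not None:
            x[j] = fixed
        return f(x)

    # Total variance V(f) from a plain Monte Carlo sample.
    ys = [evaluate() for _ in range(10000)]
    mean = sum(ys) / len(ys)
    total_var = sum((y - mean) ** 2 for y in ys) / len(ys)

    # V(E(f | x_j)): fix x_j, average over the remaining inputs,
    # then take the variance of those conditional means.
    cond = []
    for _ in range(n_outer):
        xj = rng.random()
        cond.append(sum(evaluate(xj) for _ in range(n_inner)) / n_inner)
    cmean = sum(cond) / len(cond)
    return (sum((c - cmean) ** 2 for c in cond) / len(cond)) / total_var
```

For f(x) = x₀ + 0.1·x₁, the analytic indices are S₀ ≈ 0.99 and S₁ ≈ 0.01, which the estimator recovers up to Monte Carlo noise; the cost already scales as n_outer × n_inner evaluations per parameter, which is prohibitive when each evaluation is a full simulation.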
The S_{j} rank the importance of each variable, θ_{j}, in terms of how much change in Se is present when θ_{j} is varied within \([\theta^{-}_{j}, \theta^{+}_{j}]\). However, the first-order indices do not provide a complete ranking of parameter importance when simultaneous variation in sets of variables is allowed (Homma and Saltelli 1996). To quantify importance of a parameter while accounting for its interaction with other variables, we calculate total effect sensitivity indices

\[S^{T}_{j} = \frac{\mathbb{E}(V(\mathbf{Se} \mid \theta_{\sim j}))}{V(\mathbf{Se})} = 1 - \frac{V(\mathbb{E}(\mathbf{Se} \mid \theta_{\sim j}))}{V(\mathbf{Se})},\]

where \(\theta_{\sim j}\) denotes all parameters except \(\theta_{j}\).
The \(S^{T}_{j}\) represent the expected fraction of the variance in Se remaining, if all parameters except θ _{ j } are fixed. This then accounts for how the remaining variance due to θ _{ j } can change, if θ _{∼j } is fixed at different values.
To rank the importance of each variable with respect to the variation in Se, we examine the entire set (Saltelli 2008)

\[\bigl\{ \bigl(S_{j}, S^{T}_{j}\bigr) : j = 1, 2, \dots, 14 \bigr\}.\]
The sensitivity indices have some desirable properties when applied to ranking parameters with respect to their influence on the variance of an output. If a variable does not influence the function at all, S_{j}=0, and if a variable does not have any interaction with the other variables, \(S_{j} = S^{T}_{j}\) (Sobol 2001). In all situations, we have (Sobol 2001)

\[0 \le S_{j} \le S^{T}_{j} \le 1.\]
Despite the utility of these sensitivity indices, they can be difficult to interpret since they depend on the distribution of the input parameters. Changing the interval for the parameter θ_{j}, \([\theta^{-}_{j}, \theta^{+}_{j}]\), changes the indices S_{j} and \(S^{T}_{j}\). Moreover, since this interval affects V(Se), changes to the interval for θ_{j} may affect the sensitivity indices of other parameters. This is due to the global nature of the Sobol-Saltelli sensitivity indices and may cause interpretation difficulties due to parameter interdependencies.
A traditional Monte Carlo approach to computing the sensitivity indices is computationally expensive due to nested terms such as \(V(\mathbb{E}(\mathbf{Se} \mid \theta_{j}))\). A variety of approaches have been suggested to bring down the computational cost (Homma and Saltelli 1996; Marrel et al. 2009; Oakley and O’Hagan 2004; Saltelli 2002; Saltelli et al. 1999). We compute approximations to the sensitivity indices using a statistical surrogate model (Marrel et al. 2009; Neal 1997; Oakley and O’Hagan 2002, 2004), or emulator, for the function Se(θ). The emulator uses Gaussian process regression (Higdon et al. 2008; Marrel et al. 2009; Neal 1997; Williams et al. 2006), which consists of fitting a Gaussian process Se_{g}(θ;η) to samples of Se(θ) taken at different θ parameter sets specified by the rows of the sample matrix Θ.
The Gaussian process emulator (MacKay 1998; Neal 1997) is constructed using Bayesian Gaussian process regression. For a more complete description of this process, we refer the reader to Higdon et al. (2008), Marrel et al. (2009), Oakley and O’Hagan (2002, 2004), and Williams et al. (2006). First, the emulator Se_{g}(θ;η) is a stochastic process in the variable \(\theta\in\mathbb{R}^{14}\) with state variable η. It has the property that the evaluation at any finite number of θ samples, (Se_{g}(Θ_{1}),Se_{g}(Θ_{2}),…,Se_{g}(Θ_{M}))^{T}, is a Gaussian-distributed M-dimensional random vector with mean μ=μ(Θ_{1},Θ_{2},…,Θ_{M}) and covariance Cov=Cov(Θ_{1},Θ_{2},…,Θ_{M}).
In the Bayesian regression approach, Se _{ g } is constructed from samples of the output Se _{ i }=Se(Θ _{ i }), i=1,2,…,M. The mean and covariance of Se _{ g } are defined so that realizations of the simulated values have a maximized posterior probability given a prior distribution on the form of the covariance. The form for the covariance is specified so that when evaluating at a new parameter set, θ ^{∗}, the variance of Se _{ g }(θ ^{∗}) increases for θ ^{∗} further from the samples in the matrix Θ and goes to zero, if θ ^{∗} lies in this sample set. The mean of Se _{ g }(θ ^{∗}) is related to the sampled values so that it is equal to Se _{ i } for θ ^{∗}=Θ _{ i }. Thus, Se _{ g }(θ;η) is an interpolant of the sample values.
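These interpolation properties (predictive variance growing away from the samples and vanishing at them, with the mean reproducing the sampled values) are easy to see in a minimal one-dimensional sketch. The squared-exponential covariance and its hyperparameters below are illustrative choices, not necessarily those used by the GPM/SA code.

```python
import math

def gp_posterior(xs, ys, x_star, length=1.0, sigma2=1.0, jitter=1e-10):
    """Posterior mean and variance of a zero-mean GP with a squared-
    exponential kernel, conditioned on noise-free 1-D samples (xs, ys)."""
    k = lambda a, b: sigma2 * math.exp(-0.5 * ((a - b) / length) ** 2)
    n = len(xs)
    # Covariance matrix K (with a small jitter for numerical stability)
    # and the cross-covariance vector k_* between x_star and the samples.
    K = [[k(xs[i], xs[j]) + (jitter if i == j else 0.0) for j in range(n)]
         for i in range(n)]
    ks = [k(x_star, xs[i]) for i in range(n)]

    def solve(A, rhs):
        # Gaussian elimination with partial pivoting on a copy of A.
        M = [row[:] + [r] for row, r in zip(A, rhs)]
        for col in range(n):
            piv = max(range(col, n), key=lambda r: abs(M[r][col]))
            M[col], M[piv] = M[piv], M[col]
            for r in range(col + 1, n):
                f = M[r][col] / M[col][col]
                for c in range(col, n + 1):
                    M[r][c] -= f * M[col][c]
        x = [0.0] * n
        for r in reversed(range(n)):
            x[r] = (M[r][n] - sum(M[r][c] * x[c]
                                  for c in range(r + 1, n))) / M[r][r]
        return x

    alpha = solve(K, ys)   # K alpha = y
    beta = solve(K, ks)    # K beta = k_*
    mean = sum(ks[i] * alpha[i] for i in range(n))
    var = k(x_star, x_star) - sum(ks[i] * beta[i] for i in range(n))
    return mean, max(var, 0.0)
```

Querying at a sample location returns the sampled value with near-zero variance; far from all samples, the mean falls back to the prior mean (zero) and the variance approaches the prior variance sigma2.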
Sensitivity indices of \(\mathbb{E}_{\eta}(\mathbf{Se}_{g}(\theta; \eta))\) can be computed quickly once Se _{ g } is constructed from a sample set. We refer to Marrel et al. (2009), Oakley and O’Hagan (2004) for this computation. To construct the Gaussian process and to compute the sensitivity indices, we used the Los Alamos GPM/SA code^{Footnote 4} (Higdon et al. 2008; Williams et al. 2006).
Global optimization
Our goal is to find values for each of the parameters in θ for which SampEn, for the given activities, is either minimized (for increased regularity in scheduling) or maximized (for increased spontaneity). Optimizing over the complete 14dimensional parameter space can be costly. Note that this 14dimensional space is only for two activities; each additional activity adds 6 new parameters. Therefore, analyzing five activities would require optimization over a 32dimensional space, which is computationally expensive for updating a schedule dynamically.
We use the global sensitivity indices to reduce the dimensionality of the optimization problem and identify parameters that contribute very little to the variance of SampEn. In an optimization step, these parameters are then fixed, and the remaining parameter space is searched using a global optimization procedure. If the number of parameters to which SampEn is sensitive is small, this can potentially result in a cheaper optimization procedure.
Schedules may be generated so that each activity has a desired level of regularity/irregularity by maximizing a single objective function, J(θ), involving the SampEn statistics for each activity in the schedule. We define the objective function for a schedule of N activities, \(\mathcal{A}_{1}, \mathcal{A}_{2}, \dots, \mathcal{A}_{N}\), by

\[J(\theta) = \sum_{i=1}^{N} w_{i} \bigl| \mathrm{SampEn}_{i}(\theta) - L_{i} \bigr|^{2}.\]
Here, the desired levels of SampEn for each activity are denoted by L _{ i } and weights, w _{ i }, are associated with each activity to control the importance of each term in the maximum of J(θ). It is important to note that we include the square of the absolute value in our objective function so that J(θ) is smooth.
Maximization of these types of objective functions can attain specific goals, allowing for more specificity in schedule design with regard to mixtures of regular and irregular activities. For instance, in a two-activity schedule we may choose w_{1}=w_{2}=1 and L_{1}=L_{2}=0 to obtain the objective function

\[J(\theta) = \mathrm{SampEn}_{1}(\theta)^{2} + \mathrm{SampEn}_{2}(\theta)^{2}. \tag{9}\]
Maximization of (9) generates schedules in which both activities have a high SampEn and, therefore, irregular activity demand hour time series. Alternatively, taking w_{1}=1, w_{2}=−1, and L_{1}=L_{2}=0, we get

\[J(\theta) = \mathrm{SampEn}_{1}(\theta)^{2} - \mathrm{SampEn}_{2}(\theta)^{2}. \tag{10}\]
Maximization of (10) will generate schedules in which \(\mathcal{A}_{1}\) has highly irregular demand and \(\mathcal{A}_{2}\) has very regular demand. More specific conditions can be met by specifying nonzero levels of SampEn for each activity. Setting w_{1}=−1, L_{1}=0.9, w_{2}=−0.5, and L_{2}=1.5, we get

\[J(\theta) = -\bigl|\mathrm{SampEn}_{1}(\theta) - 0.9\bigr|^{2} - 0.5\bigl|\mathrm{SampEn}_{2}(\theta) - 1.5\bigr|^{2}. \tag{11}\]
When maximizing (11), the contribution from the term involving SampEn_{1}(θ) has twice the effect of the contribution from the term involving SampEn_{2}(θ). Therefore, schedules will be generated with SampEn_{1}(θ)≈0.9, SampEn_{2}(θ)≈1.5, and SampEn_{2}(θ) farther from 1.5 than SampEn_{1}(θ) is from 0.9.
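The weighted objective is a one-line function of the SampEn values. In this sketch, the `samp_ens` argument stands for the SampEn statistics computed from a DASim run on the candidate parameter set.

```python
def schedule_objective(samp_ens, weights, levels):
    """J = sum_i w_i * |SampEn_i - L_i|**2, to be maximized by the
    global optimizer."""
    return sum(w * abs(s - L) ** 2
               for s, w, L in zip(samp_ens, weights, levels))
```

With weights (1, −1) and levels (0, 0), this reproduces the mixed irregular/regular design of (10); with negative weights and nonzero levels, the maximum is attained when each SampEn sits at its target level.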
We use the harmony search (HS) global optimization algorithm (Geem et al. 2001) to explore the parameter space. HS is a metaheuristic search algorithm, inspired by the improvisation process of jazz musicians, that optimizes (minimizes or maximizes) a certain objective function. Recently, HS has been successfully applied to a variety of problems including water distribution network design (Geem 2006b), parameter estimation (Kim et al. 2001), combined heat and power economic optimization (Vasebi et al. 2007), and even Sudoku solving (Geem 2007). In many cases, it has been shown to outperform other commonly used search algorithms, such as simulated annealing (Kirkpatrick et al. 1983), tabu search (Glover 1989, 1990), and evolutionary algorithms (Bäck and Schwefel 1993).
In HS, sets of parameters (referred to as harmonies) are randomly chosen (improvised) until the harmony memory is filled. A new harmony is improvised according to a set of rules: each parameter (note) may be chosen via random selection or memory consideration with an optional pitch adjustment (adjusting a parameter up or down slightly). The goodness of the new harmony is computed (in this case, the sum of the SampEn statistics for each activity), and if the harmony is better than the worst harmony stored in the harmony memory, the new harmony replaces the previously stored value.
HS features five main parameters: max_imp determines the maximum number of improvisations (iterations), hms is the harmony memory size (the number of best harmonies that should be remembered), hmcr is the harmony memory consideration rate (how often a note is chosen via memory consideration as opposed to random selection), par is the pitch adjusting rate (how often pitch adjustment is invoked), and mpap is the maximum pitch adjustment proportion (size of the perturbation).
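The original HS improvisation loop can be sketched in terms of these five parameters. This is a simplified, illustrative version that maximizes a generic objective over box-constrained continuous variables; it is not the paper's full implementation.

```python
import random

def harmony_search(objective, bounds, max_imp=2000, hms=50,
                   hmcr=0.75, par=0.5, mpap=0.25, seed=None):
    """Original harmony search (Geem et al. 2001), maximizing `objective`.

    bounds: list of (low, high) intervals, one per decision variable."""
    rng = random.Random(seed)
    # Fill the harmony memory with randomly improvised harmonies.
    memory = [[rng.uniform(lo, hi) for lo, hi in bounds] for _ in range(hms)]
    fitness = [objective(h) for h in memory]

    for _ in range(max_imp):
        new = []
        for j, (lo, hi) in enumerate(bounds):
            if rng.random() < hmcr:
                # Memory consideration: reuse a stored note...
                note = memory[rng.randrange(hms)][j]
                if rng.random() < par:
                    # ...with optional pitch adjustment of at most
                    # mpap * (range width), clamped to the bounds.
                    note += rng.uniform(-1.0, 1.0) * mpap * (hi - lo)
                    note = min(max(note, lo), hi)
            else:
                # Random selection from the allowable range.
                note = rng.uniform(lo, hi)
            new.append(note)
        # Replace the worst stored harmony if the new one is better.
        worst = min(range(hms), key=fitness.__getitem__)
        score = objective(new)
        if score > fitness[worst]:
            memory[worst], fitness[worst] = new, score

    best = max(range(hms), key=fitness.__getitem__)
    return memory[best], fitness[best]
```

On a toy two-variable quadratic, the search settles near the optimum; for DASim, `objective` would wrap a full 12-week simulation followed by the SampEn computation.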
A number of improvements and changes have been suggested since HS was first introduced. One change added the notion of ensemble consideration, an operation that considers relationships between decision variables (Geem 2006a). Another modification, dubbed improved harmony search, dynamically modifies the par and mpap parameters as the search progresses (Mahdavi et al. 2007). Global-best harmony search removes the mpap parameter altogether by altering the pitch-adjustment step so that values are drawn from the best harmony in the harmony memory (Omran and Mahdavi 2008). Most recently, a parameter-setting-free variation was introduced that dynamically modifies both hmcr and mpap as the search progresses (Geem and Sim 2010).
For this study, we implemented the original HS algorithm in Python. The source code has been open-sourced and is available on GitHub.^{Footnote 5} At the start of our HS optimization for DASim, C, D, β_{u}, and β_{p} are allowed to vary in the range [0,1], while α_{u} and α_{p} are allowed to vary in the range [0,86400]. The parameters γ_{u} and γ_{p} are allowed to vary in the range [0,10]. Notice that there is no need to normalize all inputs to a common range, since the sensitivity indices rank the inputs relative to their ranges. HS is then combined with global sensitivity analysis to reduce the dimensionality of the search space, which is done iteratively as follows:
Harmony Search with Global Sensitivity Analysis Algorithm

1. Provide allowable intervals for each parameter \(\theta_{j} \in [\theta^{-}_{j},\theta^{+}_{j}]\), j=1,2,…,14.

2. Take M samples of SampEn at parameter sets given by the rows of the sample matrix Θ _{ M×14}.

3. Use the samples to construct a Gaussian process emulator, Se _{ g }(θ;η).

4. Compute the sensitivity indices, \(\{ S_{j}, S^{T}_{j} \}\), j=1,2,…,14, from Se _{ g }(θ;η).

5. Choose a subset of parameters, \((\theta_{k_{1}}, \theta_{k_{2}}, \dots, \theta_{k_{d}})\), with high sensitivity values (see Fig. 4) on which to perform HS, and fix the remaining parameters (we arbitrarily fix them at the mean value of their interval). Here, {k _{1},k _{2},…,k _{ d }}⊂{1,2,…,14} denotes an arbitrary subset of d≤14 distinct parameter subscripts.

6. Perform HS over the parameter subset to maximize a given functional of the SampEn statistics for each activity.
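Step 4 of the procedure above can be illustrated with a standard Monte Carlo estimator of the first-order and total-effect Sobol indices (Saltelli 2002). The sketch below is illustrative only: a cheap analytic function stands in for the Gaussian process emulator Se _{ g }(θ;η), and the function and variable names are assumptions, not the paper's actual emulator interface.

```python
import numpy as np

def sobol_indices(f, lo, hi, n=2**14, rng=None):
    """Estimate first-order (S) and total-effect (ST) Sobol indices of f
    over the box [lo, hi], assuming independent uniform inputs."""
    rng = np.random.default_rng(rng)
    d = len(lo)
    # two independent sample matrices
    A = rng.uniform(lo, hi, size=(n, d))
    B = rng.uniform(lo, hi, size=(n, d))
    yA, yB = f(A), f(B)
    var = np.var(np.concatenate([yA, yB]))
    S, ST = np.empty(d), np.empty(d)
    for j in range(d):
        ABj = A.copy()
        ABj[:, j] = B[:, j]                    # A with column j taken from B
        yABj = f(ABj)
        S[j] = np.mean(yB * (yABj - yA)) / var           # first-order index
        ST[j] = 0.5 * np.mean((yA - yABj) ** 2) / var    # total-effect index
    return S, ST
```

For an additive function such as f(x) = 3x₁ + x₂ on [0,1]², the first input contributes 9/10 of the output variance, so the estimated indices should be approximately (0.9, 0.1); high-sensitivity inputs identified this way are the ones passed to HS in step 5.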
Each 12-week simulation of DASim for the 10-person test population takes approximately 5 seconds of wall time to complete. We initialize max_imp to 2000, hms to 50, hmcr to 0.75, par to 0.5, and mpap to 0.25. HS consistently converged to solutions of approximately the same fitness over many test runs, each with initial harmonies selected uniformly at random. As a result, we determined that a parameter sweep of the HS parameters was unnecessary.
Results
The global sensitivity analysis (Fig. 4) shows that the offset parameters, (α _{ u1},α _{ p1},α _{ u2},α _{ p2}), have the largest effect on the variation of the sample entropy for both activities. Thus, these α parameters have the most impact on regularity. Recall from Table 2 that α _{ u } and α _{ p } control the activity duration in the utility function and the activity frequency in the priority function, respectively.
We adjusted schedules to consist of two irregularly performed activities. This was done by maximizing (9), adjusting only four parameters, (α _{ u1},α _{ p1},α _{ u2},α _{ p2}), using the HS global optimization algorithm. The results of the four-parameter search were compared against tuning the entire 14-dimensional space. In Fig. 5, we show that, for small numbers of HS iterations (i.e., fewer than 350), the four-dimensional subspace search performs better on average; we can get much closer to the maximum SampEn in fewer iterations than with a search over the whole parameter space. While running 500 iterations of HS over the whole parameter space will result in a better maximum SampEn, our results show that HS over the four-dimensional space will reach 90 % of the maximum SampEn in fewer than 100 iterations. Therefore, the search space should be chosen based on computational requirements.
We performed random sampling over the entire 14-dimensional parameter space and compared the variance in the SampEn for each activity against varying only the α-parameters. Our results show that varying only the α-parameters was responsible for about 99 % of the variance in SampEn observed when the entire parameter set was allowed to vary. This result was consistent for each activity. This shows that our sensitivity analysis with the emulator gives realistic results and that optimization over the four-dimensional parameter space suffices to approximate the minimum or maximum of the sample entropy or a functional thereof.
The maximization of SampEn over the α-parameters creates a schedule with a great deal of spontaneity. In addition to maximizing the sum of both SampEn statistics, we separately maximized and minimized each activity individually, ignoring the other activity. These SampEn-minimized and SampEn-maximized schedules, along with a schedule whose SampEn equals the mean of the minimized and maximized values, are shown in Fig. 6. We see a visual difference: DASim output for a maximized-SampEn schedule is more variable over a larger range than for minimized-SampEn schedules. Also, when SampEn is minimized, regions of constant demand hours are more prevalent, which is to be expected for activities considered on an hourly cycle (i.e., when m=1).
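The SampEn statistic driving these comparisons (Richman and Moorman 2000) admits a compact implementation. The sketch below is a minimal, hedged version using the Chebyshev distance between templates and excluding self-matches, with m=1 as in this paper; the demand-hour series passed in is whatever the reader supplies.

```python
import numpy as np

def sampen(x, m=1, r=0.2):
    """Sample entropy: -ln(A/B), where B counts template pairs of length m
    within tolerance r, and A counts the same for length m+1."""
    x = np.asarray(x, dtype=float)
    N = len(x)

    def matches(mm):
        # the N-m templates of length mm (mm is m or m+1)
        tpl = np.array([x[i:i + mm] for i in range(N - m)])
        c = 0
        for i in range(len(tpl)):
            # Chebyshev distance to all later templates (no self-matches)
            d = np.max(np.abs(tpl[i + 1:] - tpl[i]), axis=1)
            c += int(np.sum(d <= r))
        return c

    B, A = matches(m), matches(m + 1)
    return -np.log(A / B) if A > 0 and B > 0 else float("inf")
```

A perfectly alternating series yields SampEn of exactly 0 (every length-1 match extends to a length-2 match), while an i.i.d. random series yields a clearly positive value, matching the regular-versus-spontaneous contrast described above.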
Discussion
This study focuses on schedule realism in a human activity model, but the methods presented here are generic and can be applied to a variety of other problems where a specific property in a high-dimensional model is desired. These types of high-dimensional tuning/optimization problems are ubiquitous in modern complex computer simulations. Thus, there is a significant need for methods of automatic tuning that incorporate systematic dimension reduction. Our combination of global sensitivity analysis and a global optimization method is effective for the application presented here. Additionally, it is sufficiently general to warrant application in many other areas.
Dynamic scheduling for synthetic populations is necessary to make simulations of human behavior phenomena more realistic. The dynamic scheduling program DASim was designed to aid in large-scale agent-based infrastructure simulation (e.g., transportation and epidemic modeling). DASim can generate schedules that differ across demographics and change in response to events, such as disease outbreaks and non-pharmaceutical interventions.
To evaluate the realism of a dynamically generated schedule, we must select metrics on which it should be evaluated. We presented a method for tuning a dynamic scheduling model for schedule regularity, which we quantify using the sample entropy (SampEn) statistic applied to population demand hours. Adjusting the SampEn statistic requires working with a high-dimensional optimization problem. We used global sensitivity analysis and statistical surrogate models to significantly lower the dimensionality of the search space. A global optimization algorithm, harmony search (HS), was used to efficiently tune the degree of regularity of a schedule.
Some of the major results of our study include:

Demand hour regularity of activities over a population can be controlled by tuning the SampEn statistic.

DASim parameters that most influence the SampEn statistic can be identified using global sensitivity analysis combined with a statistical surrogate model. We determined that the α parameters in the utility and priority functions have the largest effect on the variation of the sample entropy of an activity.

DASim parameters that result in close to optimal (i.e., minimized/maximized) SampEn values can be discovered using HS. Furthermore, this can be done efficiently with many fewer iterations by searching a parameter subspace determined by global sensitivity analysis first (just the α parameters in this study).
While we have shown how to reduce the search space and computation time when analyzing parameter importance under a particular metric, this process still takes a significant amount of compute time. We tuned our parameters in a reduced problem environment, using a 10-person population. Although this approach works for the measure of regularity discussed in this paper, it may not work for more complex measures of interest. Our initial search space of 14 dimensions is still relatively small; some simulations may have many tens, hundreds, or even thousands of dimensions. Understanding parameter importance and interactions in such high-dimensional spaces may prove difficult or even impossible in some instances using our methods.
Our analysis is based on hourly regularity for demand hours of schedules. Many other granularities may be desirable; for example, work may be regular every 12 hours. Some studies suggest that the size of the dataset be at least 10^{m} and preferably at least 30^{m} in the approximate entropy (ApEn) algorithm (Pincus and Goldberger 1994). While this is certainly possible for small values of m (recall that m=1 in this paper), larger values of m quickly become problematic (e.g., testing 12-hour regularity in work would require at least 10^{12} demand-hour data points). Alternative measures of regularity may be considered for larger values of m.
We are considering other evaluative measures to quantify additional properties, beyond regularity, of a schedule’s realism. Here, we analyzed measures of regularity of time usage and found that it is controlled by a small set of the defining parameters in the model. Another possibility would be to look at quantifying the efficiency of a dynamicallygenerated schedule in terms of location usage, whether an individual’s schedule is geographically arranged in a sensible way given his or her current location. One could also look at the total percentage of time spent on an activity. Evaluation of a measure of each of these effects would lend testable realism to a generated schedule for a population. The use of statistical emulation, global sensitivity analysis, and optimization as demonstrated here would then allow for efficient tuning of these measures.
For models that rely on human activity patterns and movement, such as disease and infrastructure models, capturing realistic activity patterns is crucial for decision support. Therefore, new techniques such as the ones proposed here are needed for analyzing high-dimensional problems. However, more research still needs to be done related to efficiently solving these problems computationally and understanding human activity patterns and behavior.
References
Bäck T, Schwefel HP (1993) An overview of evolutionary algorithms for parameter optimization. Evol Comput 1(1):1–23. doi:10.1162/evco.1993.1.1.1. http://www.mitpressjournals.org/doi/abs/10.1162/evco.1993.1.1.1
Berry BJL, Kiel LD, Elliott E (2002) Adaptive agents, intelligence, and emergent human organization: capturing complexity through agent-based modeling. Proc Natl Acad Sci USA 99(Suppl 3):7187–7188. doi:10.1073/pnas.092078899. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=128579&tool=pmcentrez&rendertype=abstract
Bhat CR, Frusti T, Zhao H, Schönfelder S, Axhausen KW (2004) Inter-shopping duration: an analysis using multi-week data. Transp Res, Part B, Methodol 38(1):39–60. doi:10.1016/S01912615(02)000930. http://linkinghub.elsevier.com/retrieve/pii/S0191261502000930
Brockmann D, Hufnagel L, Geisel T (2006) The scaling laws of human travel. Nature 439(7075):462–465. doi:10.1038/nature04292. http://www.nature.com/nature/journal/v439/n7075/full/nature04292.html
Burton RM (2003) Computational laboratories for organization science: questions, validity and docking. Comput Math Organ Theory 9(2):91–108. doi:10.1023/B:CMOT.0000022750.46976.3c. http://link.springer.com/10.1023/B:CMOT.0000022750.46976.3c
Burton RM, Obel B (1995) The validity of computational models in organization science: from model realism to purpose of the model. Comput Math Organ Theory 1(1):57–71. doi:10.1007/BF01307828. http://link.springer.com/10.1007/BF01307828
Colizza V, Barrat A, Barthelemy M, Valleron AJ, Vespignani A (2007) Modeling the worldwide spread of pandemic influenza: baseline case and containment interventions. PLoS Medicine 4(1):e13. doi:10.1371/journal.pmed.0040013. http://www.plosmedicine.org/article/info:doi/10.1371/journal.pmed.0040013
Del Valle SY, Stroud PD, Mniszewski SM (2008) Dynamic contact patterns and social structure in realistic social networks. In: Schneider HL, Huber LM (eds) Social networks: development, evaluation and influence. Nova Science Publishers, New York, pp 201–216. https://www.novapublishers.com/catalog/product_info.php?products_id=7480
Eubank S, Guclu H, Kumar VSA, Marathe MV, Srinivasan A, Toroczkai Z, Wang N (2004) Modelling disease outbreaks in realistic urban social networks. Nature 429(6988):180–184. doi:10.1038/nature02534.1. http://www.nature.com/nature/journal/v429/n6988/abs/nature02541.html
Galli E, Cuéllar L, Eidenbenz S, Ewers M, Mniszewski SM, Teuscher C (2009) ActivitySim: large-scale agent-based activity generation for infrastructure simulation. In: Proceedings of the 2009 spring simulation multiconference, San Diego, California, pp 16:1–16:9. http://dl.acm.org/citation.cfm?id=1639826
Geem ZW (2006a) Improved harmony search from ensemble of music players. In: Gabrys B, Howlett R, Jain L (eds) Knowledge-based intelligent information and engineering systems. Springer, Berlin, pp 86–93. doi:10.1007/11892960_11. http://www.springerlink.com/content/b382536117777v22/
Geem ZW (2006b) Optimal cost design of water distribution networks using harmony search. Eng Optim 38(3):259–277. doi:10.1080/03052150500467430. http://www.tandfonline.com/doi/abs/10.1080/03052150500467430
Geem ZW (2007) Harmony search algorithm for solving sudoku. In: Apolloni B, Howlett RJ, Jain L (eds) Knowledge-based intelligent information and engineering systems. Springer, Berlin, pp 371–378. doi:10.1007/9783540748199_46. http://link.springer.com/chapter/10.1007/9783540748199_46
Geem ZW, Sim KB (2010) Parameter-setting-free harmony search algorithm. Appl Math Comput 217(8):3881–3889. doi:10.1016/j.amc.2010.09.049. http://www.sciencedirect.com/science/article/pii/S009630031001009X
Geem ZW, Kim JH, Loganathan GV (2001) A new heuristic optimization algorithm: harmony search. Simulation 76(2):60–68. doi:10.1177/003754970107600201. http://sim.sagepub.com/cgi/doi/10.1177/003754970107600201
Germann TC, Kadau K, Longini IM, Macken CA (2006) Mitigation strategies for pandemic influenza in the United States. Proc Natl Acad Sci USA 103(15):5935–5940. doi:10.1073/pnas.0601266103. http://www.pnas.org/content/103/15/5935.short
Glover F (1989) Tabu search—part I. ORSA J Comput 1(3):190–206. doi:10.1287/ijoc.1.3.190. http://pubsonline.informs.org/doi/abs/10.1287/ijoc.1.3.190
Glover F (1990) Tabu search—part II. ORSA J Comput 2(1):4–32. doi:10.1287/ijoc.2.1.4. http://pubsonline.informs.org/doi/abs/10.1287/ijoc.2.1.4
Goldberger AL, Peng CK, Lipsitz LA (2002) What is physiologic complexity and how does it change with aging and disease? Neurobiol Aging 23(1):23–26. http://www.sciencedirect.com/science/article/pii/S0197458001002664
González MC, Hidalgo CA, Barabási AL (2008) Understanding individual human mobility patterns. Nature 453(7196):779–782. doi:10.1038/nature06958. http://www.nature.com/nature/journal/v453/n7196/full/nature06958.html
Higdon D, Gattiker J, Williams B, Rightley M (2008) Computer model calibration using high-dimensional output. J Am Stat Assoc 103(482):570–583
Homma T, Saltelli A (1996) Importance measures in global sensitivity analysis of nonlinear models. Reliab Eng Syst Saf 52(1):1–17
Hornero R, Aboy M, Abásolo D, McNames J, Goldstein B (2005) Interpretation of approximate entropy: analysis of intracranial pressure approximate entropy during acute intracranial hypertension. IEEE Trans Biomed Eng 52(10):1671–1680. doi:10.1109/TBME.2005.855722. http://ieeexplore.ieee.org/xpl/articleDetails.jsp?arnumber=1510851
Hornero R, Abásolo D, Jimeno N, Sánchez CI, Poza J, Aboy M (2006) Variability, regularity, and complexity of time series generated by schizophrenic patients and control subjects. IEEE Trans Biomed Eng 53(2):210–218. doi:10.1109/TBME.2005.862547
Joh CH, Arentze TA, Timmermans HJP (2001) Understanding activity scheduling and rescheduling behaviour: theory and numerical illustration. GeoJournal 53(4):359–371. http://www.springerlink.com/index/V617G57M563V027H.pdf
Kim JH, Geem ZW, Kim ES (2001) Parameter estimation of the nonlinear Muskingum model using harmony search. J Am Water Resour Assoc 37(5):1131–1138. doi:10.1111/j.17521688.2001.tb03627.x. http://doi.wiley.com/10.1111/j.17521688.2001.tb03627.x
Kirkpatrick S, Gelatt CD Jr, Vecchi MP (1983) Optimization by simulated annealing. Science 220(4598):671–680. doi:10.1126/science.220.4598.671. http://www.sciencemag.org/content/220/4598/671.abstract
Kitamura R, Hoorn TVD (1987) Regularity and irreversibility of weekly travel behavior. Transportation 14(3):227–251. http://www.springerlink.com/index/P65744J47W10561N.pdf
Kitamura R, Chen C, Pendyala RM, Narayanan R (2000) Microsimulation of daily activity-travel patterns for travel demand forecasting. Transportation 27(1):25–51. http://www.springerlink.com/index/X0H2736477711X33.pdf
Kitamura R, Yamamoto T, Susilo YO, Axhausen KW (2006) How routine is a routine? An analysis of the day-to-day variability in prism vertex location. Transp Res, Part A, Policy Pract 40(3):259–279. doi:10.1016/j.tra.2005.07.002. http://linkinghub.elsevier.com/retrieve/pii/S0965856405001011
Lake DE, Richman JS, Griffin MP, Moorman JR (2002) Sample entropy analysis of neonatal heart rate variability. Am J Physiol, Regul Integr Comp Physiol 283(3):R789–797. doi:10.1152/ajpregu.00069.2002. http://www.ncbi.nlm.nih.gov/pubmed/12185014
Lourenço HR, Martin OC, Stützle T (2003) Iterated local search. In: Glover F, Kochenberger GA (eds) Handbook of metaheuristics. Kluwer Academic, Dordrecht, pp 321–354. Chap. 11. http://arxiv.org/abs/math/0102188
MacKay D (1998) Introduction to Gaussian processes. NATO Adv Stud Inst Ser F Comput Syst Sci 168:133–166
Macy MW, Willer R (2002) From factors to actors: computational sociology and agentbased modeling. Annu Rev Sociol 28:143–166. doi:10.1146/annurev.soc.28.110601.141117. http://www.annualreviews.org/doi/abs/10.1146/annurev.soc.28.110601.141117
Mahdavi M, Fesanghary M, Damangir E (2007) An improved harmony search algorithm for solving optimization problems. Appl Math Comput 188(2):1567–1579. doi:10.1016/j.amc.2006.11.033. http://www.sciencedirect.com/science/article/pii/S0096300306015098
Marrel A, Iooss B, Laurent B, Roustant O (2009) Calculations of Sobol indices for the Gaussian process metamodel. Reliab Eng Syst Saf 94(3):742–751
McKinley RA, McIntire LK, Schmidt R, Repperger DW, Caldwell JA (2011) Evaluation of eye metrics as a detector of fatigue. Hum Factors 53(4):403–414. doi:10.1177/0018720811411297. http://hfs.sagepub.com/content/53/4/403.short
Mniszewski SM, Del Valle S, Stroud PD, Riese JM, Sydoriak SJ (2008) Pandemic simulation of antivirals + school closures: buying time until strain-specific vaccine is available. Comput Math Organ Theory 14(3):209–221. doi:10.1007/s1058800890271. http://www.springerlink.com/content/67q2l550345438v2/
Neal R (1997) Monte Carlo implementation of Gaussian process models for Bayesian regression and classification. arXiv:physics/9701026
Oakley J, O’Hagan A (2002) Bayesian inference for the uncertainty distribution of computer model outputs. Biometrika 89(4):769–784
Oakley J, O’Hagan A (2004) Probabilistic sensitivity analysis of complex models: a Bayesian approach. J R Stat Soc, Ser B, Stat Methodol 66(3):751–769
Omran MGH, Mahdavi M (2008) Global-best harmony search. Appl Math Comput 198(2):643–656. doi:10.1016/j.amc.2007.09.004. http://www.sciencedirect.com/science/article/pii/S0096300307009320
Paleshi A, Evans GW, Heragu SS, Moghaddam KS (2011) Simulation of mitigation strategies for a pandemic influenza. In: Proceedings of the 2011 winter simulation conference, Phoenix, Arizona, pp 1340–1348. http://ieeexplore.ieee.org/xpls/abs_all.jsp?arnumber=6147855
Pan X, Han CS, Dauber K, Law KH (2007) A multi-agent based framework for the simulation of human and social behaviors during emergency evacuations. AI Soc 22(2):113–132. doi:10.1007/s0014600701261. http://www.springerlink.com/index/10.1007/s0014600701261
Pincus SM (1991) Approximate entropy as a measure of system complexity. Proc Natl Acad Sci USA 88(6):2297–2301. http://www.pubmedcentral.nih.gov/articlerender.fcgi?artid=51218&tool=pmcentrez&rendertype=abstract
Pincus SM, Goldberger AL (1994) Physiological time-series analysis: what does regularity quantify? Am J Physiol, Heart Circ Physiol 266(4):H1643–H1656. http://ajpheart.physiology.org/content/266/4/H1643.short
Pincus SM, Kalman RE (2004) Irregularity, volatility, risk, and financial market time series. Proc Natl Acad Sci USA 101(38):13,709–13,714. doi:10.1073/pnas.0405168101. http://www.pnas.org/content/101/38/13709
Rabitz H, Aliş OF (1999) General foundations of highdimensional model representations. J Math Chem 25(2–3):197–233. doi:10.1023/A:1019188517934. http://link.springer.com/article/10.1023/A:1019188517934
Richman JS, Lake DE, Moorman JR (2004) Sample entropy. In: Johnson ML, Brand L (eds) Methods in enzymology, vol 384. Academic Press, San Diego, pp 172–184. doi:10.1016/S00766879(04)840114. http://www.sciencedirect.com/science/article/pii/S0076687904840114
Richman JS, Moorman JR (2000) Physiological time-series analysis using approximate entropy and sample entropy. Am J Physiol, Heart Circ Physiol 278(6):H2039–H2049. http://ajpheart.physiology.org/content/278/6/H2039.short
Saltelli A (2002) Making best use of model evaluations to compute sensitivity indices. Comput Phys Commun 145(2):280–297
Saltelli A (2008) Global sensitivity analysis: the primer. Wiley, Chichester
Saltelli A, Tarantola S, Chan KS (1999) A quantitative modelindependent method for global sensitivity analysis of model output. Technometrics 41(1):39–56
Schlich R, Axhausen KW (2003) Habitual travel behaviour: evidence from a six-week travel diary. Transportation 30(1):13–36. http://www.springerlink.com/index/vxpkq226606v3062.pdf
Sobol I (2001) Global sensitivity indices for nonlinear mathematical models and their Monte Carlo estimates. Math Comput Simul 55(1–3):271–280
Stroud PD, Del Valle S, Sydoriak SJ, Riese JM, Mniszewski SM (2007) Spatial dynamics of pandemic influenza in a massive artificial society. J Artif Soc Soc Simul 10(4):9. http://jasss.soc.surrey.ac.uk/10/4/9.html
Varela M, Jimenez L, Fariña R (2003) Complexity analysis of the temperature curve: new information from body temperature. Eur J Appl Physiol 89(3–4):230–237. doi:10.1007/s0042100207902. http://www.ncbi.nlm.nih.gov/pubmed/12736830
Vasebi A, Fesanghary M, Bathaee SMT (2007) Combined heat and power economic dispatch by harmony search algorithm. Int J Electr Power Energy Syst 29(10):713–719. doi:10.1016/j.ijepes.2007.06.006. http://www.sciencedirect.com/science/article/pii/S0142061507000634
Williams B, Higdon D, Gattiker J, Moore L, McKay M, KellerMcNulty S (2006) Combining experimental data and computer simulations, with an application to flyer plate experiments. Bayesian Anal 1(4):765–792
Acknowledgements
We would like to acknowledge the Institutional Computing Program at Los Alamos National Laboratory for use of their HPC cluster resources. This research has been supported at Los Alamos National Laboratory under the Department of Energy contract DEAC5206NA25396 and a grant from the NIH/NIGMS in the Models of Infectious Disease Agent Study (MIDAS) program U01GM09765801.
Rights and permissions
Open Access This article is distributed under the terms of the Creative Commons Attribution License which permits any use, distribution, and reproduction in any medium, provided the original author(s) and the source are credited.
Cite this article
Fairchild, G., Hickmann, K.S., Mniszewski, S.M. et al. Optimizing human activity patterns using global sensitivity analysis. Comput Math Organ Theory 20, 394–416 (2014). https://doi.org/10.1007/s1058801391710
Keywords
 Global optimization
 Global sensitivity analysis
 Sample entropy
 Agentbased modeling
 Bayesian Gaussian process regression
 Harmony search