Simulation

Part of the book series: International Series in Operations Research & Management Science ((ISOR,volume 264))

Abstract

The goal of this chapter is to provide an understanding of how simulation can be an effective business analytics technique for informed decision making. Our focus is on applications: understanding the steps in building a simulation model and interpreting its results; the theoretical background can be found in the reference textbooks described at the end of the chapter. Simulation is a practical approach to decision making under uncertainty in many situations. For example: (1) We have an analytical model and would like to compare its output against a simulation of the system. (2) We do not have an analytical model for the entire system but understand the various parts of the system and their dynamics well enough to model them; in this case, simulation is useful for putting the well-understood parts together and examining the results. In both cases, we describe the underlying uncertainty, develop the model in a systematic way to capture the decision variables and, when necessary, the dynamics of the system, and use simulation to obtain values of the relevant outcomes. This chapter sets out the steps needed to do all of the above in a systematic manner.

Notes

  1. http://www.palisade.com/decisiontools_suite (accessed on Jan 25, 2018).

  2. http://www.palisade.com/risk/ (accessed on Jan 25, 2018).

  3. www.priceline.com (accessed on Aug 17, 2017).

  4. Anderson and Wilson (2011).

References

  • Anderson, C. K., & Wilson, J. G. (2011). Name-your-own price auction mechanisms – Modeling and future implications. Journal of Revenue and Pricing Management, 10(1), 32–39. https://doi.org/10.1057/rpm.2010.46.

  • Davison, M. (2014). Quantitative finance: A simulation-based approach using Excel. London: Chapman and Hall.

  • GE Look ahead. (2015). The digital twin: Could this be the 21st-century approach to productivity enhancements? Retrieved May 21, 2017, from http://gelookahead.economist.com/digital-twin/.

  • Holland, C., Levis, J., Nuggehalli, R., Santilli, B., & Winters, J. (2017). UPS optimizes delivery routes. Interfaces, 47(1), 8–23.

  • Jian, N., Freund, D., Wiberg, H., & Henderson, S. (2016). Simulation optimization for a large-scale bike-sharing system. In T. Roeder, P. Frazier, R. Szechtman, E. Zhou, T. Hushchka, & S. Chick (Eds.), Proceedings of the 2016 Winter Simulation Conference.

  • Law, A. (2014). Simulation modeling and analysis. McGraw-Hill Series in Industrial Engineering and Management.

  • Porteus, E. (2002). Foundations of stochastic inventory theory. Stanford, CA: Stanford Business Books.

  • Ross, S. (2013). Simulation. Amsterdam: Elsevier.

  • Stine, R., & Foster, D. (2014). Statistics for business decision making and analysis. London: Pearson.

  • Technology Review. (2017). September 2017 edition.

Author information

Correspondence to Sumit Kunnumkal.


Electronic Supplementary Material

Supplementary Data 10.1

Priceline_Hotelbids (XLSX 11 kb)

Supplementary Data 10.2

Watch_bids (XLSX 30 kb)

Appendices

Appendix 1: Generating Random Numbers on a Computer

Virtually all computer simulations use mathematical algorithms to generate random variables. These algorithms are very fast, and their output can be replicated at will. However, no such sequence can be truly random: given sufficient knowledge of how the algorithm operates, one can in fact predict the sequence of numbers it generates. Consequently, we refer to such a sequence as a pseudo-random sequence. This is not a huge concern from a practical standpoint, as most popular algorithms generate sequences that are virtually indistinguishable from a truly random sequence.

A commonly used generator is the linear congruential generator (LCG), which obtains a sequence x_0, x_1, … of integers via the recursion

$$ x_{n+1} = \left( a x_n + c \right) \bmod m, $$

where a, c, and m are integers also known, respectively, as the multiplier, increment, and modulus. The number x_0 is called the initial seed. The mod operator applied to two numbers returns the remainder when the first number is divided by the second; so 5 mod 3 = 2 and 6 mod 3 = 0. Since each number in the sequence generated lies between 0 and m − 1, x_n/m is a number that lies between 0 and 1. Therefore, the sequence x_1/m, x_2/m, … has the appearance of a sequence of random numbers generated from the uniform distribution between 0 and 1. However, note that since each x_n lies between 0 and m − 1, the sequence must repeat itself after a finite number of values. Therefore, we would like to choose the values of a, c, and m so that a large number of values can be generated before the sequence repeats itself. Moreover, once the numbers a, c, m, and x_0 are known, the sequence of numbers generated by the LCG is completely deterministic. However, if these numbers were unknown to us and we were only observing the sequence generated by the LCG, it would be very hard for us to distinguish this sequence from a truly random sequence. For example, Fig. 10.16 shows the frequency histogram of the first 100 numbers of the sequence x_1/m, x_2/m, … obtained by setting a = 16,807, m = 2,147,483,647, c = 0, and x_0 = 33,554,432. It has the appearance of being uniformly distributed between 0 and 1, and it can be verified that the uniform distribution indeed is the distribution that best fits the data. By setting the seed of a random number generator, we fix the sequence of numbers that is generated by the algorithm. Thus, we are also able to easily replicate the simulation.
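As a rough illustration (this code is our own sketch, not part of the chapter's examples), the recursion can be implemented in a few lines of R using the parameter values behind Fig. 10.16:

lcg = function(n, a = 16807, c = 0, m = 2147483647, seed = 33554432) {
  x = numeric(n)
  x[1] = (a * seed + c) %% m        # first value from the initial seed
  for (i in 2:n) {
    x[i] = (a * x[i - 1] + c) %% m  # linear congruential recursion
  }
  x / m                             # scale to lie between 0 and 1
}
u = lcg(100)                        # first 100 pseudo-random numbers
hist(u)                             # frequency histogram, as in Fig. 10.16

Re-running the function with the same seed reproduces exactly the same sequence, which is what makes a simulation replicable.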

Fig. 10.16: Histogram of the sequence generated by the linear congruential generator with a = 16,807, m = 2,147,483,647, c = 0, and x_0 = 33,554,432

The LCG algorithm generates a sequence of numbers that have the appearance of coming from a uniform distribution. It is possible to build on this to generate pseudo-random numbers from other probability distributions (both discrete and continuous). We refer the reader to Ross (2013) and Law (2014) for more details on the algorithms and their properties.
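For instance, one standard approach is the inverse transform method: if U is uniform between 0 and 1, then −ln(1 − U)/λ follows an exponential distribution with rate λ. A small R sketch of our own (the rate value 2 is an arbitrary choice):

u = runif(1000)           # uniform pseudo-random numbers between 0 and 1
lambda = 2                # rate of the exponential distribution (arbitrary)
x = -log(1 - u) / lambda  # inverse transform: an exponentially distributed sample
hist(x)                   # histogram resembles an exponential density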

Appendix 2: Fitting Distributions to Data

When we build a simulation model, a key assumption is that we know the distributions of the input random variables. In practice, these distributions are obtained by statistically testing the observed data to find the distribution that best fits it. The statistical tests are referred to as goodness-of-fit tests. The underlying idea is to compare the observed distribution with a hypothesized distribution and measure the discrepancy between the two: if the discrepancy is small, the hypothesized distribution is a good fit to the observed data; otherwise, it is a poor fit. The error metrics and the hypothesis tests can be formalized; see Ross (2013) and Law (2014) for details. Here we focus only on how to use @Risk to run the goodness-of-fit tests.

Given historical data, we can use the “Distribution Fitting” tool in @Risk to find the distribution that best matches the data (see Fig. 10.17). After describing the nature of the data set (discrete or continuous) as well as additional details regarding the range of values it can take (minimum and maximum values), we run the Distribution Fitting tool. This gives us a range of distributions and their corresponding fit values. We can broadly think of a fit value as measuring the error between the observed data and the hypothesized distribution; in general, a smaller fit value indicates a better fit. There are different goodness-of-fit tests available, and both the fit values and the relative ranking of the distributions can vary depending on the test used. We refer the reader to Ross (2013) and Law (2014) for more details on the different goodness-of-fit tests and when a given test is more applicable (Fig. 10.18).

Fig. 10.17: Distribution Fitting tool in @Risk

Fig. 10.18: Goodness-of-fit test results
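For readers who do not have @Risk, the same idea can be sketched in base R; the example below is our own illustration, using simulated values in place of historical data and the Kolmogorov–Smirnov test (ks.test):

data = rnorm(200, mean = 50, sd = 10)   # stand-in for observed historical data
# compare the data against a normal distribution with parameters estimated from the data;
# a small test statistic (large p-value) suggests the hypothesized distribution is not a poor fit
ks.test(data, "pnorm", mean = mean(data), sd = sd(data))

Note that estimating the parameters from the same data makes this particular test only approximate; it is shown here simply to convey the idea of measuring the discrepancy between observed and hypothesized distributions.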

Appendix 3: Simulation in Excel

Excel has some basic simulation capabilities. The RAND(.) function generates a (pseudo) random number from the uniform distribution between 0 and 1. The RANDBETWEEN(.,.) function takes two arguments a and b, and generates an integer that is uniformly distributed between a and b. There are methods that can use this sequence of uniform random numbers as an input to generate sequences from other probability distributions (see Ross (2013) and Law (2014) for more details). A limitation of the RAND(.) and RANDBETWEEN(.,.) functions is that it is not possible to fix the initial seed, and so the results of a simulation cannot be easily replicated.
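By contrast, most statistical programming environments allow the seed to be fixed. As a small illustration of our own in R, the base functions runif and sample play roles similar to RAND and RANDBETWEEN:

set.seed(1)                        # fix the seed; Excel's RAND() offers no equivalent
runif(5)                           # five uniform random numbers between 0 and 1, like RAND()
sample(10:20, 5, replace = TRUE)   # five integers uniformly distributed between 10 and 20, like RANDBETWEEN(10, 20)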

As mentioned, there are a number of other packages and programming languages that can be used to build simulation models. For example, the code snippet below implements the fashion retailer simulation described in Example 10.1 in R:

Sales = rnorm(1000, 980, 300)   # 1,000 draws from a normal distribution (mean 980, sd 300)
Profit = 250 * Sales - 150000   # profit corresponding to each simulated sales value

The first line generates a random input sample of size 1000 from the normal distribution with a mean of 980 and a standard deviation of 300. The second line generates a random sample of the output (profit) by using the random sample of the input and the relation between profit and sales. Note that both the seed and the random number generator can be specified in R; refer to the R documentation.
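For example, a minimal extension of the snippet above (our own sketch; the seed value is arbitrary) fixes the seed and summarizes the simulated profits:

set.seed(123)                   # fix the seed so the run can be replicated exactly
Sales = rnorm(1000, 980, 300)   # regenerate the input sample
Profit = 250 * Sales - 150000   # corresponding output sample
mean(Profit)                    # estimate of the expected profit
mean(Profit < 0)                # estimated probability of making a loss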

Copyright information

© 2019 Springer Nature Switzerland AG

Cite this chapter

Kunnumkal, S. (2019). Simulation. In: Pochiraju, B., Seshadri, S. (eds) Essentials of Business Analytics. International Series in Operations Research & Management Science, vol 264. Springer, Cham. https://doi.org/10.1007/978-3-319-68837-4_10
