
Compact Representation of Solution Vectors in Kronecker-Based Markovian Analysis

  • Peter Buchholz
  • Tuğrul Dayar
  • Jan Kriege
  • M. Can Orhan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9826)

Abstract

It is well known that the infinitesimal generator underlying a multi-dimensional Markov chain with a relatively large reachable state space can be represented compactly on a computer in the form of a block matrix in which each nonzero block is expressed as a sum of Kronecker products of smaller matrices. Nevertheless, solution vectors used in the analysis of such Kronecker-based Markovian representations still require memory proportional to the size of the reachable state space, and this becomes a bigger problem as the number of dimensions increases. The current paper shows that it is possible to use the hierarchical Tucker decomposition (HTD) to store the solution vectors during Kronecker-based Markovian analysis relatively compactly and still carry out the basic operation of vector-matrix multiplication in Kronecker form relatively efficiently. Numerical experiments on two different problems of varying sizes indicate that larger memory savings are obtained with the HTD approach as the number of dimensions increases.
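
The core kernel behind the abstract, multiplying a probability vector by a Kronecker product of small factor matrices without ever forming that product, can be illustrated in a few lines. The sketch below is not the authors' implementation: it is a minimal NumPy example over the full product state space with dense square factors, whereas the paper works with vectors restricted to the reachable state space and stored in HTD form; the function name kron_vec_mult is chosen here only for illustration.

    import numpy as np

    def kron_vec_mult(x, factors):
        """Return y = x @ (A_1 ⊗ A_2 ⊗ ... ⊗ A_H) without forming the
        Kronecker product explicitly (a reshaping variant of the shuffle
        algorithm)."""
        dims = [A.shape[0] for A in factors]
        # View x as an H-way tensor; C ordering matches Kronecker indexing.
        X = x.reshape(dims)
        for h, A in enumerate(factors):
            # Multiply along mode h: contract index i_h with the rows of A_h.
            X = np.tensordot(X, A, axes=([h], [0]))
            # tensordot appends the new (column) index; move it back to mode h.
            X = np.moveaxis(X, -1, h)
        return X.reshape(-1)

    # Sanity check against the explicitly formed Kronecker product.
    rng = np.random.default_rng(0)
    A = [rng.random((2, 2)), rng.random((3, 3)), rng.random((4, 4))]
    x = rng.random(2 * 3 * 4)
    assert np.allclose(kron_vec_mult(x, A),
                       x @ np.kron(np.kron(A[0], A[1]), A[2]))

Performed this way, the multiplication costs on the order of (n_1 + ... + n_H) · n_1 · ... · n_H operations rather than the (n_1 · ... · n_H)^2 of a dense matrix-vector product, which is what makes the Kronecker form attractive; the memory bottleneck the paper addresses is the vector itself, whose length still grows with the size of the reachable state space.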

Keywords

Markov chains · Kronecker products · Hierarchical Tucker decomposition · Reachable state space · Compact vectors

Notes

Acknowledgement

This work is supported by the Alexander von Humboldt Foundation through the Research Group Linkage Programme. The research of the last author is supported by The Scientific and Technological Research Council of Turkey.

Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Peter Buchholz (1)
  • Tuğrul Dayar (2), corresponding author
  • Jan Kriege (1)
  • M. Can Orhan (2)
  1. Informatik IV, Technical University of Dortmund, Dortmund, Germany
  2. Department of Computer Engineering, Bilkent University, Bilkent, Ankara, Turkey
