Abstract
According to Amdahl’s Law, the speedup achievable by parallel execution on a SIMD architecture is bounded. Furthermore, according to Gustafson’s Law, there are algorithms that can achieve almost linear speedup. However, researchers have found examples of superlinear speedup for certain types of algorithms executed on specific multiprocessors.
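For reference, the two laws mentioned above can be written as follows, with p denoting the parallelizable fraction of the work and n the number of processors (this notation is added here for clarity and is not taken from the paper):

```latex
% Amdahl's Law: for a fixed problem size, speedup is bounded by the serial fraction (1 - p)
S_{\mathrm{Amdahl}}(n) = \frac{1}{(1 - p) + \frac{p}{n}} \;\le\; \frac{1}{1 - p}

% Gustafson's Law: scaled speedup, for a problem that grows with n, is almost linear in n
S_{\mathrm{Gustafson}}(n) = (1 - p) + p\,n
```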
In this paper we achieve superlinear speedup on GPU devices, which are also categorized as SIMD. We implement a structure-persistent algorithm that efficiently exploits the shared cache memory and avoids cache misses as much as possible. Our theoretical analysis and experimental results show the existence of superlinear speedup for algorithms that run on existing GPU devices.
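As an illustration of the kind of shared-memory blocking the abstract refers to, below is a minimal sketch of a tiled matrix multiplication kernel in CUDA. The kernel name, tile size, and the assumption that N is a multiple of the tile size are illustrative choices and do not reproduce the authors’ implementation:

```cuda
// Sketch of a tiled matrix multiplication C = A * B for square N x N matrices.
// Assumes N is a multiple of TILE and the kernel is launched with
// dim3 block(TILE, TILE) and dim3 grid(N / TILE, N / TILE).
#define TILE 16

__global__ void matmul_tiled(const float *A, const float *B, float *C, int N)
{
    __shared__ float As[TILE][TILE];   // tile of A staged in on-chip shared memory
    __shared__ float Bs[TILE][TILE];   // tile of B staged in on-chip shared memory

    int row = blockIdx.y * TILE + threadIdx.y;
    int col = blockIdx.x * TILE + threadIdx.x;
    float acc = 0.0f;

    // Walk over all tiles of A and B that contribute to C[row][col].
    for (int t = 0; t < N / TILE; ++t) {
        As[threadIdx.y][threadIdx.x] = A[row * N + t * TILE + threadIdx.x];
        Bs[threadIdx.y][threadIdx.x] = B[(t * TILE + threadIdx.y) * N + col];
        __syncthreads();               // wait until both tiles are fully loaded

        for (int k = 0; k < TILE; ++k)
            acc += As[threadIdx.y][k] * Bs[k][threadIdx.x];
        __syncthreads();               // finish reads before overwriting the tiles
    }
    C[row * N + col] = acc;
}
```

Each thread block stages one tile of A and one tile of B in shared memory, so every global-memory element is read once per tile rather than once per multiply-add; this reuse of on-chip memory is the general idea behind reducing cache misses in blocked matrix multiplication.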
References
Amdahl, G.M.: Validity of the single-processor approach to achieving large scale computing capabilities. In: AFIPS Conference Proceedings, April 18-20, vol. 30, pp. 483–485. AFIPS Press, Reston (1967)
Anderson, E., Bai, Z., Bischof, C., Blackford, S., Demmel, J., Dongarra, J., Croz, J.D., Greenbaum, A., Hammarling, S., McKenney, A., Sorensen, D.: LAPACK Users’ Guide, 3rd edn. Society for Industrial and Applied Mathematics, Philadelphia, PA (1999)
Bell, N., Garland, M.: Efficient sparse matrix-vector multiplication on CUDA. NVIDIA Technical Report NVR-2008-004 (December 2008)
Blackford, L.S., et al.: An updated set of Basic Linear Algebra Subprograms (BLAS). ACM Trans. Math. Softw. 28(2), 135–151 (2002)
Clarke, D., Lastovetsky, A., Rychkov, V.: Column-based matrix partitioning for parallel matrix multiplication on heterogeneous processors based on functional performance models. In: Alexander, M., D’Ambra, P., Belloum, A., Bosilca, G., Cannataro, M., Danelutto, M., Di Martino, B., Gerndt, M., Jeannot, E., Namyst, R., Roman, J., Scott, S.L., Traff, J.L., Vallée, G., Weidendorfer, J. (eds.) Euro-Par 2011, Part I. LNCS, vol. 7155, pp. 450–459. Springer, Heidelberg (2012)
DeFlumere, A., Lastovetsky, A., Becker, B.: Partitioning for parallel matrix-matrix multiplication with heterogeneous processors: The optimal solution. In: HCW 2012. IEEE Computer Society, Shanghai (2012)
Glaskowsky, P.: NVIDIA’s Fermi: The First Complete GPU Computing Architecture. White paper, NVIDIA (2009)
Grama, A., Karypis, G., Kumar, V., Gupta, A.: Introduction to Parallel Computing, 2nd edn. Addison-Wesley (January 2003)
Gusev, M., Ristov, S.: Superlinear speedup in Windows Azure cloud. Tech. Rep. IIT:06-12, University Ss Cyril and Methodius, Skopje, Macedonia, Faculty of Information Sciences and Computer Engineering (July 2012)
Gustafson, J.L.: Reevaluating Amdahl’s law. Commun. ACM 31(5), 532–533 (1988)
Jacquelin, M., Marchal, L., Robert, Y.: The impact of cache misses on the performance of matrix product algorithms on multicore platforms. Research Report RR-7456, INRIA (November 2010), http://hal.inria.fr/inria-00537822/en/
Kirk, D., Hwu, W.M.: Programming Massively Parallel Processors: A Hands-on Approach, 1st edn. Morgan Kaufmann Publishers Inc., USA (2010)
Lindholm, E., Nickolls, J., Oberman, S., Montrym, J.: NVIDIA Tesla: A unified graphics and computing architecture. IEEE Micro 28(2), 39–55 (2008)
Nath, R., Tomov, S., Dongarra, J.: An improved MAGMA GEMM for Fermi graphics processing units. Int. J. High Perform. Comput. Appl. 24(4), 511–515 (2010)
Nickolls, J., Dally, W.: The GPU computing era. IEEE Micro 30(2), 56–69 (2010)
NVIDIA: CUDA C Programming Guide (August 2012), http://developer.download.nvidia.com/compute/DevZone/docs/html/C/doc/CUDA_C_Programming_Guide.pdf/
NVIDIA: Next Generation CUDA Compute Architecture: Kepler GK110 (2012)
Playne, D.P., Hawick, K.A.: Comparison of GPU architectures for asynchronous communication with finite-differencing applications. Concurrency and Computation: Practice and Experience 24(1), 73–83 (2012)
Ristov, S., Gusev, M.: Superlinear speedup for matrix multiplication. In: Proceedings of the ITI 2012 34th International Conference on Information Technology Interfaces, pp. 499–504 (2012)
Ristov, S., Gusev, M., Kostoska, M., Kjiroski, K.: Virtualized environments in cloud can have superlinear speedup. In: Proceedings of the 5th Balkan Conference in Informatics, BCI 2012. ACM (2012)
Volkov, V., Demmel, J.W.: Benchmarking GPUs to tune dense linear algebra. In: Proceedings of the 2008 ACM/IEEE Conference on Supercomputing, SC 2008, pp. 31:1–31:11. IEEE Press, Piscataway (2008)
Wittenbrink, C.M., Kilgariff, E., Prabhu, A.: Fermi GF100 GPU architecture. IEEE Micro 31(2), 50–59 (2011)
Copyright information
© 2013 Springer-Verlag Berlin Heidelberg
About this paper
Cite this paper
Djinevski, L., Ristov, S., Gusev, M. (2013). Superlinear Speedup for Matrix Multiplication in GPU Devices. In: Markovski, S., Gusev, M. (eds.) ICT Innovations 2012. Advances in Intelligent Systems and Computing, vol. 207. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-642-37169-1_28
DOI: https://doi.org/10.1007/978-3-642-37169-1_28
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-642-37168-4
Online ISBN: 978-3-642-37169-1
eBook Packages: Engineering (R0)