Abstract
Modern GPU applications often need to synchronize thousands of threads for correctness. Because the warp scheduling algorithm, memory coalescing, the memory scheduling algorithm, and other factors can produce different execution schedules for warps within the same Cooperative Thread Array (CTA), warps must wait for one another at synchronization points, which introduces synchronization cost. In this paper, we measure the synchronization cost of several GPU applications using three metrics. With synchronization information at the CTA boundary, the warps still running in a CTA can determine how far they lag behind. We promote the warp scheduling priority of these warps and the memory scheduling priority of their memory requests to accelerate their execution, making both the warp scheduler and the memory scheduler synchronization-aware. Experiments show that the synchronization-aware warp scheduling algorithm reduces the three synchronization metrics to 86.66%, 92.12% and 85.63% of the baseline and improves GPU performance by 5.76%. For memory-intensive benchmarks, the synchronization-aware memory scheduling algorithm improves system performance by 6.81%. Combining the two schedulers further improves GPU performance by 6.46%.
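The core idea above — warps that lag behind their CTA's barrier get promoted in the issue order — can be illustrated with a minimal sketch. This is an assumption-laden simplification, not the paper's implementation: here a warp's "lagging degree" is modeled simply as the fraction of its sibling warps already waiting at the barrier, and the baseline tie-break (standing in for whatever policy the scheduler uses, e.g. greedy-then-oldest) is the warp id.

```python
# Sketch of synchronization-aware warp scheduling priority (hypothetical
# model; names and the lagging-degree formula are illustrative only).
from dataclasses import dataclass

@dataclass
class Warp:
    warp_id: int
    cta_id: int
    at_barrier: bool  # True once the warp reaches the CTA-wide barrier

def lagging_degree(warp, warps_in_cta):
    """Fraction of sibling warps already waiting at the barrier."""
    if warp.at_barrier:
        return 0.0
    waiting = sum(1 for w in warps_in_cta if w.at_barrier)
    return waiting / len(warps_in_cta)

def schedule_order(warps):
    """Issue order: warps with a higher lagging degree are promoted;
    ties fall back to warp id (a stand-in for the baseline policy)."""
    by_cta = {}
    for w in warps:
        by_cta.setdefault(w.cta_id, []).append(w)
    runnable = [w for w in warps if not w.at_barrier]
    return sorted(runnable,
                  key=lambda w: (-lagging_degree(w, by_cta[w.cta_id]),
                                 w.warp_id))

warps = [Warp(0, 0, True), Warp(1, 0, True),
         Warp(2, 0, False),                    # lags behind CTA 0's barrier
         Warp(3, 1, False), Warp(4, 1, False)]  # CTA 1: no warp waiting yet
order = [w.warp_id for w in schedule_order(warps)]
print(order)  # warp 2 is promoted ahead of CTA 1's warps: [2, 3, 4]
```

The same lagging-degree signal could analogously reorder the lagging warps' requests in the memory controller's queue, which is the paper's second lever.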
Acknowledgements
This paper is supported by the National Natural Science Foundation of China under Grant No. 61379035, the Natural Science Foundation of Zhejiang Province under Grant No. LY14F020005, the Open Fund of the Mobile Network Application Technology Key Laboratory of Zhejiang Province, and the Innovation Group of New Generation of Mobile Internet Software and Services of Zhejiang Province.
Copyright information
© 2015 Springer International Publishing Switzerland
Cite this paper
Ma, J., Chen, T., Wu, M. (2015). Making GPU Warp Scheduler and Memory Scheduler Synchronization-Aware. In: Qiang, W., Zheng, X., Hsu, C.H. (eds.) Cloud Computing and Big Data. CloudCom-Asia 2015. Lecture Notes in Computer Science, vol. 9106. Springer, Cham. https://doi.org/10.1007/978-3-319-28430-9_12
DOI: https://doi.org/10.1007/978-3-319-28430-9_12
Publisher Name: Springer, Cham
Print ISBN: 978-3-319-28429-3
Online ISBN: 978-3-319-28430-9
eBook Packages: Computer Science, Computer Science (R0)