TCU: A Multi-Objective Hardware Thread Mapping Unit for HPC Clusters

  • Ravi Kumar Pujari
  • Thomas Wild
  • Andreas Herkersdorf
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9697)


Thread scheduling on HPC and manycore platforms must simultaneously meet multiple, partially orthogonal optimization targets, such as maximizing CPU performance, meeting the deadlines of time-critical tasks, minimizing power consumption, and ensuring thermal resilience. Doing so is a major challenge because of the associated scalability and thread-management overhead. We tackle these challenges by introducing the Thread Control Unit (TCU), a configurable, low-latency, low-overhead hardware thread mapper for the compute nodes of an HPC cluster. The TCU takes various sensor information into account and can map threads to the 4–16 CPUs of a compute node within a small, bounded number of clock cycles, using round-robin, single-objective, or multi-objective policies. The TCU design considers not only load balancing and performance criteria but also physical constraints such as temperature limits, power budgets, and reliability. Evaluations of different mapping policies show that multi-objective thread mapping yields roughly 10–40 % lower mapping latency for periodic workloads than single-objective or round-robin policies; for bursty workloads under high load, a 20 % reduction is achieved.
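The multi-objective policy described above can be pictured as a weighted cost function over per-CPU sensor readings, with the thread mapped to the lowest-cost CPU. The sketch below is an illustrative software model only, not the TCU's actual hardware logic; the weight values and sensor field names (`load`, `temp`, `power`) are hypothetical.

```python
# Illustrative software model of a multi-objective thread-mapping decision:
# each CPU gets a weighted cost over normalized sensor readings, and the
# thread is assigned to the CPU with the lowest cost. All weights and
# field names are assumptions for the sake of the example.

def map_thread(cpus, w_load=0.5, w_temp=0.3, w_power=0.2):
    """Return the index of the CPU with the lowest weighted cost.

    Each entry of `cpus` is a dict of normalized readings in [0, 1]:
    'load', 'temp', and 'power'. Setting one weight to 1 and the others
    to 0 degenerates into a single-objective policy.
    """
    def cost(c):
        return w_load * c['load'] + w_temp * c['temp'] + w_power * c['power']
    return min(range(len(cpus)), key=lambda i: cost(cpus[i]))

# Example: a quad-core node with different sensor states per CPU.
cpus = [
    {'load': 0.9, 'temp': 0.4, 'power': 0.5},  # heavily loaded
    {'load': 0.2, 'temp': 0.8, 'power': 0.3},  # lightly loaded but hot
    {'load': 0.3, 'temp': 0.3, 'power': 0.2},  # balanced across objectives
    {'load': 0.1, 'temp': 0.9, 'power': 0.9},  # idle but hot and power-hungry
]
chosen = map_thread(cpus)  # picks the balanced CPU (index 2)
```

A hardware realization evaluates such a score for all CPUs in parallel, which is what bounds the mapping decision to a small, fixed number of clock cycles regardless of core count.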

The TCU macro adds a mere 9 % hardware area overhead and achieves more than 150 k thread mappings per second on an FPGA prototype of a RISC quad-core compute node running at a moderate 50 MHz. A 45 nm ASIC realization of the TCU can operate well above 1 GHz and supports up to 3.15 million thread mappings per second.
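The reported clock frequencies and mapping rates imply an upper bound on the cycles spent per mapping decision; a quick check shows the two prototypes are consistent at roughly 320–330 cycles per mapping.

```python
# Implied worst-case cycles per thread mapping from the reported figures.
# Since the rates are stated as lower bounds ("more than", "up to" at
# "well above 1 GHz"), these are rough upper bounds on the cycle budget.
fpga_cycles = 50_000_000 / 150_000        # 50 MHz / 150 k mappings/s  ≈ 333
asic_cycles = 1_000_000_000 / 3_150_000   # 1 GHz  / 3.15 M mappings/s ≈ 317
```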


Keywords: Hardware scheduler · Thread mapper · Multi-objective · MPSoC · HPC · Manycore systems



This work was supported by the German Research Foundation (DFG) as part of the Transregional Collaborative Research Center “Invasive Computing” (SFB/TR 89).



Copyright information

© Springer International Publishing Switzerland 2016

Authors and Affiliations

  • Ravi Kumar Pujari (1)
  • Thomas Wild (1)
  • Andreas Herkersdorf (1)
  1. Institute for Integrated Systems, Technische Universität München, Munich, Germany
