A Low Noise Unikernel for Extreme-Scale Systems

  • Stefan Lankes
  • Simon Pickartz
  • Jens Breitbart
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10172)


We expect the size and complexity of future supercomputers to grow on the path to exascale systems and beyond. System software therefore has to adapt to this complexity in order to simplify the development of scalable applications. In cloud environments, the activity of a virtual machine on a neighboring core may degrade performance due to issues such as cache contamination (the noisy neighbor problem). In this paper, we present the unikernel operating system HermitCore, which provides predictable runtimes and thereby improves scalability. It extends the multi-kernel approach with unikernel features while offering better programmability and scalability for hierarchical systems. In addition, the same binary can run as a unikernel within a virtual machine. Using a unikernel decreases the memory footprint of Virtual Machines (VMs), which reduces pressure on the cache system and improves overall performance. We demonstrate the predictable runtime of the design via micro-benchmarks, taking the example of HermitCore on the upcoming manycore architecture Knights Landing.


Keywords: Virtual Machine · Message Passing Interface · System Call · Virtual Cluster · NUMA Node



Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. E.ON Energy Research Center, Institute for Automation of Complex Power Systems, RWTH Aachen University, Aachen, Germany
  2. Bosch Chassis Systems Control, Stuttgart, Germany
