
Moving from exascale to zettascale computing: challenges and techniques

Perspective
Frontiers of Information Technology & Electronic Engineering


High-performance computing (HPC) is essential for progress in both traditional and emerging scientific fields. Given the current pace of development, exascale computing is expected to enter practical use around 2020. As Moore’s law approaches its limit, however, HPC will face severe challenges in moving from exascale to zettascale, making the decade after 2020 a vital period for developing key HPC techniques. In this study, we discuss the challenges of enabling zettascale computing with respect to both hardware and software. We then present a perspective on the evolution and revolution of future HPC technology, leading to our main recommendations in support of zettascale computing in the coming decade.
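To make the scale of the transition concrete, the sketch below works through the raw arithmetic: zettascale means 10^21 floating-point operations per second, a thousandfold jump over exascale. The power figure used is an illustrative assumption (roughly 30 MW for a first-generation exascale system), not a number taken from this article.

```python
# Back-of-envelope scaling from exascale to zettascale.
# FLOPS definitions are standard; the power figure below is an
# illustrative assumption, not data from this article.

EXA = 1e18    # exascale: 10^18 floating-point operations per second
ZETTA = 1e21  # zettascale: 10^21 floating-point operations per second

scale_factor = ZETTA / EXA            # 1000x more compute throughput
assumed_exascale_power_mw = 30.0      # assumed power draw of an exascale system (MW)
naive_zettascale_power_mw = assumed_exascale_power_mw * scale_factor

print(f"Throughput gap: {scale_factor:.0f}x")
print(f"Power at constant efficiency: {naive_zettascale_power_mw / 1000:.0f} GW")
```

Under these assumptions, a zettascale machine built at today’s energy efficiency would draw tens of gigawatts, which is one way to see why the hardware and software challenges discussed in this perspective center so heavily on efficiency.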





Author information



Corresponding author

Correspondence to Kai Lu.

Additional information

Project supported by the National Key Technology R&D Program of China (No. 2016YFB0200401)


About this article


Cite this article

Liao, X.K., Lu, K., Yang, C.Q., et al. Moving from exascale to zettascale computing: challenges and techniques. Frontiers Inf Technol Electronic Eng 19, 1236–1244 (2018).


