
Joint Supercomputer Center of the Russian Academy of Sciences: Present and Future


The Joint Supercomputer Center of the Russian Academy of Sciences (JSCC RAS) is the leading supercomputer center of the Russian Academy of Sciences. JSCC RAS employs new technology, based in particular on domestic solutions that provide an ultra-high-density layout of nodes in the computational field and high energy efficiency. The center offers users the latest architectures of computing nodes and communication infrastructure. It operates advanced, energy-efficient “hot” and “cold” water-cooling systems and a wide range of engineering equipment; a system for monitoring and managing the computational resources of a distributed network of scientific supercomputer centers; a domestic system for scheduling and managing jobs; software development and maintenance tools; and application packages for high-performance computing. The paper analyzes the current state of JSCC RAS and reviews its development plans in the main scientific and practical directions.
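The energy efficiency the abstract refers to is commonly quantified by Power Usage Effectiveness (PUE), the metric discussed in [8] and [12]: the ratio of total facility power to the power consumed by the IT equipment alone. As an illustrative aside, a minimal sketch of the calculation (the formula is the standard one; the sample figures are hypothetical, not JSCC RAS measurements):

```python
def pue(total_facility_kw: float, it_equipment_kw: float) -> float:
    """Power Usage Effectiveness: total facility power divided by
    IT equipment power. 1.0 is the theoretical ideal; warm-water
    cooling helps approach it by reducing or eliminating chiller load."""
    if it_equipment_kw <= 0:
        raise ValueError("IT load must be positive")
    return total_facility_kw / it_equipment_kw

# Hypothetical example: 1500 kW drawn by the whole facility,
# of which 1200 kW is consumed by the compute nodes themselves.
print(pue(1500.0, 1200.0))  # 1.25
```

A PUE close to 1.0 means nearly all power goes to computation rather than to cooling and other overhead, which is the motivation for the hot-water cooling systems mentioned above.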



  1. V. K. Levin, Domestic supercomputers of the MVS family. Accessed Apr. 19, 2019.

  2. V. Korneev, “The development of system software for parallel supercomputers,” Lect. Notes Comput. Sci. 2823, 46–53 (2003).

  3. W. P. Turner, J. H. Seader, and K. G. Brill, Industry Standard Tier Classifications Define Site Infrastructure Performance (2001). Accessed Apr. 19, 2019.

  4. O. Aladyshev, A. Baranov, A. Ovsyannikov, G. Balayan, and V. Sinitsin, “Methods and tools of combining jobs flows from cloud platforms and supercomputer resource managers,” Program. Prod. Sist. Algoritmy, No. 4, 54–63 (2018).

  5. A. Reuther et al., “Scalable system scheduling for HPC and big data,” J. Parallel Distrib. Comput. 111, 76–92 (2018).

  6. A. V. Baranov, A. V. Kiselev, V. V. Starichkov, R. P. Ionin, and D. S. Lyakhovets, “Comparison of batch processing systems in terms of organizing an industrial account,” in Proceedings of the International Supercomputer Conference, September 17–22, 2012, Novorossiysk (Mosk. Gos. Univ., Moscow, 2012), pp. 506–508.

  7. O. Aladyshev and S. Leshchev, “Features of a network of data storages for supercomputer center,” Tr. Inst. Sist. Anal. RAN 7 (4), 151–156 (2017).

  8. J. Yuventi and R. Mehdizadeh, “A critical analysis of Power Usage Effectiveness and its use in communicating data center energy consumption,” Energy Buildings 64, 90–94 (2013).

  9. S. Moss, Getting into Hot Water (2018). Accessed Apr. 12, 2019.

  10. I. Odyntsov, E. Tutlyaeva, and A. Moskovsiy, “Towards an exascale supercomputer infrastructure,” in Scientific Services and Internet, Proceedings of the 19th All-Russian Scientific Conference, Novorossiysk, Russia, Sept. 18–23, 2017 (2017), pp. 377–378.

  11. Uptime Institute 8th Annual Data Center Survey Shows Need for Change with Rise of Complex Digital Infrastructure. Accessed Apr. 12, 2019.

  12. Green ICT: Sustainable Communications and Information Technology. Is PUE Still Above 2.0 for Most Data Centers? Accessed Apr. 12, 2019.

  13. M. S. Birrittella et al., “Intel® Omni-Path architecture: enabling scalable, high performance fabrics,” in Proceedings of the IEEE 23rd Annual Symposium on High-Performance Interconnects, Santa Clara, CA, 2015, pp. 1–9.

  14. A. Baranov et al., “Effective usage of the link between geographically distributed supercomputer centers,” Tr. Inst. Sist. Anal. RAN 7 (4), 137–142 (2017).

  15. A. Baranov, E. Kiselev, and D. Chernyaev, “Experimental comparison of performance and fault tolerance of software packages Pyramid, X-COM, and BOINC,” Commun. Comput. Inform. Sci. 687, 279–290 (2016).

  16. A. V. Baranov and D. S. Nikolaev, “The use of container virtualization in the high-performance computing,” Program. Prod. Sist. Teor. Prilozh. 7(1 (28)), 117–134 (2016).

  17. A. V. Baranov, G. I. Savin, B. M. Shabanov, et al., “Methods of jobs containerization for supercomputer workload managers,” Lobachevskii J. Math. 40 (5), 525–534 (2019).

  18. A. A. Rybakov, “Computational workload distribution between supercomputer nodes for fluid dynamics calculations using grid fragmentation,” Sovrem. Inform. Tekhnol. IT-Obrazov. 12, 101–107 (2016).

  19. A. V. Baranov, E. A. Kiselev, E. S. Kormilitsin, V. F. Ogaryshev, and P. N. Telegin, “Modification of the statistic subsystem of the Joint Supercomputer Center of the Russian Academy of Sciences,” Tr. Inst. Sist. Anal. RAN 8 (4), 136–144 (2018).

  20. B. M. Shabanov et al., “The jobs management system for the distributed network of the supercomputer centers,” Tr. Inst. Sist. Anal. RAN 8 (6), 65–73 (2018).

  21. A. V. Baranov and D. S. Lyakhovets, “Comparison of the quality of job scheduling in workload management systems SLURM and SUPPZ,” in Scientific Services & Internet: All Facets of Parallelism, Proceedings of the International Supercomputing Conference, Novorossiysk, Russia, Sept. 23–28, 2013 (2013), pp. 410–414.

  22. L. A. Benderskii, D. A. Lyubimov, A. O. Chestnykh, B. M. Shabanov, and A. A. Rybakov, “The use of the RANS/ILES method to study the influence of coflow wind on the flow in a hot, nonisobaric, supersonic airdrome jet during its interaction with the jet blast deflector,” High Temp. 56, 247–254 (2018).

  23. W. Kramer, “Top500 versus sustained performance: the top problems with the top500 list—and what to do about them,” in Proceedings of the 21st International Conference on Parallel Architectures and Compilation Techniques PACT '12 (ACM, New York, NY, 2012), pp. 223–230.

  24. N. Dikarev, B. Shabanov, and A. Shmelev, “Vector data flow processor and shared-memory multiprocessor built on its base,” Tr. Inst. Sist. Anal. RAN 7 (4), 143–150 (2017).

  25. O. S. Aladyshev, A. V. Baranov, R. P. Ionin, E. A. Kiselev, and B. M. Shabanov, “Variants of deployment the high performance computing in clouds,” in Proceedings of the 2018 IEEE Conference of Russian Young Researchers in Electrical and Electronic Engineering EIConRus, pp. 1453–1457.

  26. ExaHyPE—An Exascale Hyperbolic PDE Engine. Accessed Apr. 12, 2019.

  27. T. Skordas, “Horizon 2020,” SPIE Profess. (2012).

  28. M. Biktimirov, A. Zhizhchenko, A. Ovsyannikov, A. Sher, and P. Klimov, “Efficient network connectivity for the Data-center in the distributed ICT infrastructure,” Mekh. Upravl. Inform. 6 (51), 33–40 (2014).

  29. A. N. Sotnikov, I. N. Sobolevskaya, S. A. Kirillov, and I. N. Cherednichenko, “Subject-oriented and interdisciplinary digital collections in the electronic knowledge environment,” CEUR Workshop Proc. 2260, 448–453 (2018).



The work was carried out at the Joint Supercomputer Center, Russian Academy of Sciences, as part of the state assignment, research topic: 0065-2019-0016 (reg. no. AAAA-A19-119011590098-8).

Author information


Corresponding authors

Correspondence to G. I. Savin, B. M. Shabanov, P. N. Telegin or A. V. Baranov.

Additional information

Submitted by A. M. Elizarov



Cite this article

Savin, G.I., Shabanov, B.M., Telegin, P.N. et al. Joint Supercomputer Center of the Russian Academy of Sciences: Present and Future. Lobachevskii J Math 40, 1853–1862 (2019).


Keywords and phrases

  • supercomputer center
  • data center
  • energy efficiency
  • computer cluster
  • supercomputer management system
  • cooling technology