Optimizing the parameters of the Lustre-file-system-based HPC system for reverse time migration

  • Vladimir O. Rybintsev

Abstract

The article is devoted to optimizing computing-cluster performance and storage throughput in a Lustre-file-system-based HPC system running the reverse time migration task. Optimization is needed because adding processor cores initially speeds up the computation, but beyond a certain point the calculations start to take longer even as more cores are added while storage throughput stays the same. This behaviour arises because the performance gain from parallel operation is offset by storage delays that grow nonlinearly with load. To capture the particulars of this case study, the notion of specific task complexity, defined as the number of floating-point operations per input/output byte, was introduced. Combining this notion with the fundamentals of queuing theory yields a simple formula linking the Lustre file system storage throughput, the number of storage nodes, and the cluster performance in the optimal configuration. LINPACK and SPC-2 benchmark results were used as initial data.
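The trade-off described in the abstract can be illustrated with a toy queuing model. The sketch below is not the author's derivation: it only assumes an M/M/1-style storage queue whose waiting time scales as 1/(1 - rho), and every parameter value (per-core speed, total work, the specific task complexity q in FLOP per byte, and the aggregate storage throughput) is hypothetical. It reproduces the qualitative behaviour: total task time first falls as cores are added, then rises sharply once the implied I/O rate approaches the fixed storage throughput.

    # Toy model (illustrative parameters only): compute time shrinks ~1/n
    # with core count, while storage waiting time grows nonlinearly as the
    # offered I/O load approaches the throughput of a fixed storage back end.
    def task_time(n_cores,
                  work_flop=1e15,    # total floating-point work of the task
                  core_flops=1e10,   # sustained FLOP/s per core (hypothetical)
                  q=200.0,           # specific task complexity, FLOP per I/O byte
                  storage_bw=5e9):   # aggregate storage throughput, bytes/s
        """Total time = parallel compute time + storage service and queuing time."""
        compute_t = work_flop / (n_cores * core_flops)
        io_bytes = work_flop / q                # I/O volume implied by q
        offered_bps = io_bytes / compute_t      # I/O rate the cluster generates
        rho = offered_bps / storage_bw          # storage utilization
        if rho >= 1.0:
            return float("inf")                 # storage saturated
        service_t = io_bytes / storage_bw
        return compute_t + service_t / (1.0 - rho)  # M/M/1-style delay growth

    # Sweep core counts: time falls, bottoms out, then diverges at saturation.
    for n in (8, 16, 32, 48, 64, 80, 96, 128):
        print(n, task_time(n))

With these made-up numbers the task time bottoms out around 48-64 cores, climbs as the storage queue builds, and diverges once the storage is saturated, which is exactly the regime the article's formula is meant to locate analytically.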

Keywords

High-performance computing · Reverse time migration · Disc array throughput · Lustre file system · Queuing model · Specific task complexity

Acknowledgements

Funding was provided by MPEI.

Copyright information

© Springer Science+Business Media, LLC, part of Springer Nature 2019

Authors and Affiliations

  1. National Research University (Moscow Power Engineering Institute), Moscow, Russia
