
Numerical analysis of parallel implementation of the reorthogonalized ABS methods

  • Original Paper
  • Published in Central European Journal of Operations Research

Abstract

Solving systems of linear equations is a critical step in many computational tasks. We recently published a novel ABS-based reorthogonalization algorithm for computing the QR factorization. Experimental analysis on the Matlab 2015a platform showed that this ABS-based algorithm computed the rank of the coefficient matrix, the orthogonal bases, and the QR factorization more accurately than the built-in Matlab rank and qr functions. However, the reorthogonalization step significantly increased the computational cost. We therefore tested a new approach to accelerating the algorithm by implementing it on different parallel platforms: the reorthogonalized ABS algorithm was implemented using Matlab's Parallel Computing Toolbox and the C++ Accelerated Massive Parallelism (AMP) runtime library. We tested various matrices, including Pascal, Vandermonde, and randomly generated dense matrices. The performance of the parallel algorithm was measured by the speed-up factor, defined as the fold reduction in execution time relative to the sequential algorithm. For comparison, we also tested the effect of a parallel implementation of the classical Gram–Schmidt algorithm incorporating a reorthogonalization step. The results show that the achieved speed-up is significant and that the performance of the parallel algorithm improves as the number of equations grows. They demonstrate that the reorthogonalized ABS algorithm is practical and efficient, which broadens the practical usefulness of our algorithms.
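The comparison baseline mentioned above, classical Gram–Schmidt with a reorthogonalization pass (often called CGS2), can be sketched as follows. This is a minimal pure-Python illustration, not the paper's Matlab implementation: the function names `cgs2` and `orthogonalize` are ours, and the sketch omits the rank detection and pivoting that the ABS-based algorithm provides.

```python
def dot(u, v):
    return sum(x * y for x, y in zip(u, v))

def norm(v):
    return dot(v, v) ** 0.5

def orthogonalize(v, Q):
    """Subtract from v its projections onto the vectors already in Q."""
    for q in Q:
        c = dot(q, v)
        v = [x - c * y for x, y in zip(v, q)]
    return v

def cgs2(A):
    """Orthonormal basis for the columns of A (given as a list of columns).

    Each column gets a second orthogonalization pass: a single
    reorthogonalization is usually enough to restore orthogonality
    lost to rounding in the first classical Gram-Schmidt pass.
    """
    Q = []
    for a in A:
        v = orthogonalize(list(a), Q)   # first Gram-Schmidt pass
        v = orthogonalize(v, Q)         # reorthogonalization pass
        n = norm(v)
        Q.append([x / n for x in v])
    return Q
```

The speed-up factor used in the paper would then be computed as the sequential execution time of such a loop divided by the execution time of its parallel counterpart.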



Acknowledgements

The authors would like to thank József Abaffy and Attila Mócsai for helpful suggestions.

Author information


Corresponding author

Correspondence to Szabina Fodor.


About this article


Cite this article

Fodor, S., Németh, Z. Numerical analysis of parallel implementation of the reorthogonalized ABS methods. Cent Eur J Oper Res 27, 437–454 (2019). https://doi.org/10.1007/s10100-018-0557-4

