A Study on Vectorization Methods for Multicore SIMD Architecture Provided by Compilers

Part of the Advances in Intelligent Systems and Computing book series (AISC, volume 248)


SIMD vectorization has received considerable attention in recent years as a vital technique for accelerating multimedia, scientific, and embedded applications on SIMD architectures. SIMD has extensive applications, though the focus has largely been on multimedia, because it is an area of computing that demands as much computing power as possible and, in most cases, must process large amounts of data in one go. This makes it a good candidate for parallelization. Many compiler frameworks support vectorization, such as Intel ICC, GNU GCC, and LLVM. In this paper, we discuss the GNU GCC and LLVM compilers, their optimization and vectorization methods, and evaluate the impact of the various vectorization methods these compilers support; finally, we discuss methods to enhance the vectorization process.


Keywords: Intel ICC · GNU GCC · LLVM · SIMD · Vectorization





Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Department of Computer Science and Engineering, Dr. B.R. Ambedkar National Institute of Technology, Jalandhar, India
