An Approach for Compiler Optimization to Exploit Instruction Level Parallelism

Conference paper
Part of the Smart Innovation, Systems and Technologies book series (SIST, volume 28)

Abstract

Instruction Level Parallelism (ILP) is not a new idea. Unfortunately, ILP architectures are not well suited to all conventional high-level language compilers and compiler optimization techniques. Instruction Level Parallelism is a technique that allows a sequence of instructions derived from a sequential program (without rewriting it) to be parallelized for execution on multiple pipelined functional units. As a result, performance improves while running existing software. At the implicit level, parallelism is obtained by modifying the compiler; at the explicit level, it is obtained by exploiting the parallelism available in the hardware. To achieve a high degree of instruction level parallelism, it is necessary to analyze and evaluate the techniques of speculative execution and control dependence analysis, and to follow multiple flows of control. Researchers continue to seek ways to increase parallelism by an order of magnitude beyond current approaches. In this paper we present the impact of control flow support on a highly parallel architecture with 2-core and 4-core configurations, and we also investigate the scope of parallelism both explicitly and implicitly. For our experiments we used the Trimaran simulator; the benchmarks are tested on abstract machine models created through the Trimaran simulator.
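To illustrate the kind of compiler transformation that exposes instruction level parallelism, the following sketch (not taken from the paper; the function names, array sizes, and unroll factor are illustrative assumptions) shows how unrolling a loop produces several data-independent operations per iteration that a VLIW or superscalar scheduler can issue on separate pipelined functional units.

```c
#include <stdio.h>

#define N 1024  /* assumed divisible by the unroll factor of 4 */

/* Original, sequential form: one independent multiply-add per iteration. */
void saxpy(float a, const float *x, float *y)
{
    for (int i = 0; i < N; i++)
        y[i] = a * x[i] + y[i];
}

/* Unrolled by 4: the four statements in the body touch disjoint elements,
 * so they have no data dependences on one another and an ILP-oriented
 * scheduler can overlap them in the same (or adjacent) issue slots. */
void saxpy_unrolled(float a, const float *x, float *y)
{
    for (int i = 0; i < N; i += 4) {
        y[i]     = a * x[i]     + y[i];
        y[i + 1] = a * x[i + 1] + y[i + 1];
        y[i + 2] = a * x[i + 2] + y[i + 2];
        y[i + 3] = a * x[i + 3] + y[i + 3];
    }
}

int main(void)
{
    static float x[N], y[N];
    for (int i = 0; i < N; i++) { x[i] = (float)i; y[i] = 1.0f; }

    saxpy_unrolled(2.0f, x, y);
    printf("y[10] = %.1f\n", y[10]);  /* expect 21.0 = 2*10 + 1 */
    return 0;
}
```

Both versions compute the same result; the unrolled form simply makes the independence of adjacent iterations visible to the instruction scheduler.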

Keywords

Control Flow Graph (CFG), Edition Based Redefinition (EBR), Intermediate Representation (IR), Very Long Instruction Word (VLIW)

Copyright information

© Springer International Publishing Switzerland 2014

Authors and Affiliations

  1. Uttar Pradesh Technical University, Lucknow, India
  2. Madan Mohan Malviya University of Technology, Gorakhpur, India