Implications of Job Loading and Scheduling Structures on Machine Memory Effectiveness

  • Abraham Ayegba Alfa
  • Sanjay Misra
  • Francisca N. Ogwueleka
  • Ravin Ahuja
  • Adewole Adewumi
  • Robertas Damasevicius
  • Rytis Maskeliunas
Conference paper
Part of the Lecture Notes in Electrical Engineering book series (LNEE, volume 612)


Reliable parameters for determining the effectiveness of processing elements (such as memory and processors) are cycles per instruction (CPI), execution speedup, and clock frequency. Job loading and scheduling techniques organise instruction processing to match the requirements of the underlying hardware. One of the earliest methods of scheduling jobs for machines arranges instruction sets in serial order, a method known as pipelining. Another technique, introduced in this paper, overlaps instruction sets to allow concurrent processing and execution. A further job-scheduling technique, the static approach, enlarges the processing elements, as in the Intel Itanium. However, there is great concern about the most appropriate means of scheduling and loading jobs composed entirely of dependent and branched instructions. The cooperative nature of present-day computation has expanded the need to allow users to participate in multiple problem-solving environments. In addition, the paper investigates the implications of these job loading and scheduling approaches on the speedup and performance of memory systems. It finds that overlapping instruction sets during execution is the most effective technique for speedup and memory-element performance. Future work should focus on exploiting parallelism among diverse machines cooperating in instruction processing and execution.
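The metrics the abstract names (CPI, clock frequency, speedup) relate through the standard performance equation, CPU time = instruction count × CPI / frequency. A minimal sketch of that relation, with hypothetical workload numbers chosen for illustration (they do not come from the paper):

```python
def cpu_time(instruction_count, cpi, frequency_hz):
    """Execution time in seconds: instructions * cycles-per-instruction / clock rate."""
    return instruction_count * cpi / frequency_hz

def speedup(time_baseline, time_improved):
    """Speedup is the ratio of baseline to improved execution time."""
    return time_baseline / time_improved

# Hypothetical workload: 1e9 instructions on a 2 GHz machine.
serial = cpu_time(1e9, cpi=4.0, frequency_hz=2e9)     # serial issue, CPI = 4
overlap = cpu_time(1e9, cpi=1.2, frequency_hz=2e9)    # overlapped issue, CPI ~ 1.2

print(speedup(serial, overlap))  # ~3.33x gain from overlapping instructions
```

Overlapping execution improves effective CPI rather than clock frequency, which is why the paper measures its techniques in CPI and speedup terms.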


Basic block · Speedups · Performance · Memory · Instructions · Scheduling · Jobs · Loading



Copyright information

© Springer Nature Singapore Pte Ltd. 2020

Authors and Affiliations

  • Abraham Ayegba Alfa (1)
  • Sanjay Misra (2)
  • Francisca N. Ogwueleka (3)
  • Ravin Ahuja (4)
  • Adewole Adewumi (2)
  • Robertas Damasevicius (5)
  • Rytis Maskeliunas (5)
  1. Kogi State College of Education, Ankpa, Nigeria
  2. Covenant University, Otta, Nigeria
  3. Nigerian Defence Academy, Kaduna, Nigeria
  4. Vishwakarma Skill University, Gurugram, India
  5. Kaunas University of Technology, Kaunas, Lithuania
