
Instruction Level Distributed Processing: Adapting to Future Technology

  • Conference paper

Part of the book series: Lecture Notes in Computer Science (LNCS, volume 1940)

Abstract

For the past two decades, the emphasis in processor microarchitecture has been on instruction level parallelism (ILP), that is, on increasing performance by increasing the number of “instructions per cycle”. In striving for higher ILP, there has been an ongoing evolution from pipelining to superscalar, with researchers pushing toward increasingly wide superscalar designs. Emphasis has been placed on wider instruction fetch, higher instruction issue rates, larger instruction windows, and increasing use of prediction and speculation. This trend has led to very complex, hardware-intensive processors.




Copyright information

© 2000 Springer-Verlag Berlin Heidelberg

Cite this paper

Smith, J.E. (2000). Instruction Level Distributed Processing: Adapting to Future Technology. In: Valero, M., Joe, K., Kitsuregawa, M., Tanaka, H. (eds) High Performance Computing. ISHPC 2000. Lecture Notes in Computer Science, vol 1940. Springer, Berlin, Heidelberg. https://doi.org/10.1007/3-540-39999-2_1


  • DOI: https://doi.org/10.1007/3-540-39999-2_1

  • Publisher Name: Springer, Berlin, Heidelberg

  • Print ISBN: 978-3-540-41128-4

  • Online ISBN: 978-3-540-39999-5

