Abstract

Companies such as Google, Yahoo and Microsoft maintain extremely large data repositories within which searches are frequently conducted. In an article entitled "Data-Intensive Supercomputing: The Case for DISC", Randal Bryant describes such data repositories and suggests an agenda for applying them more broadly to massive data-set problems of importance to the scientific community and to society in general.

References

  1. Bregman, L.M.: Some properties of nonnegative matrices and their permanents. Soviet Math. Dokl. 14, 945–949 (1973)
  2. Bryant, R.E.: Data-intensive supercomputing: the case for DISC. Technical Report CMU-CS-07-128, Carnegie Mellon University School of Computer Science (2007)
  3. Chazelle, B.: The soft heap: an approximate priority queue with optimal error rate. Journal of the ACM 47 (2000)
  4. Floyd, R.W., Rivest, R.L.: Expected time bounds for selection. Communications of the ACM 18(3), 165–172 (1975)
  5. Munro, J.I., Paterson, M.S.: Selection and sorting with limited storage. Theoretical Computer Science 12, 315–323 (1980)
  6. Prodinger, H.: Multiple Quickselect: Hoare's Find algorithm for several elements. Information Processing Letters 56, 123–129 (1995)
  7. Vitter, J.S.: Random sampling with a reservoir. ACM Transactions on Mathematical Software 11(1), 37–57 (1985)

Copyright information

© Springer-Verlag Berlin Heidelberg 2007

Authors and Affiliations

  • Richard M. Karp
  1. International Computer Science Institute, Berkeley, USA, and University of California at Berkeley, USA