Mathematical Systems Theory, Volume 25, Issue 2, pp 141–159

On the time and space complexity of computation using write-once memory or is pen really much worse than pencil?

  • Sandy Irani
  • Moni Naor
  • Ronitt Rubinfeld

Abstract

We introduce a model of computation based on the use of write-once memory. Write-once memory has the property that bits may be set but not reset. Our model consists of a RAM with a small amount of regular memory (such as logarithmic or n^α for α < 1, where n is the size of the problem) and a polynomial amount of write-once memory. Bounds are given on the time required to simulate, on write-once memory, algorithms which originally run on a RAM with a polynomial amount of regular memory. We attempt to characterize algorithms that can be simulated on our write-once memory model with very little slow-down. A persistent computation is one in which, at all times, the memory state of the computation at any previous point in time can be reconstructed. We show that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space. We also study the space requirements of algorithms running on the write-once model, and show that general simulations of algorithms originally running on a RAM with regular memory by algorithms running on our write-once memory model require space proportional to the number of steps simulated. To study the space complexity further, we define an analogue of the pebbling game, called the pebble-sticker game. A sticker differs from a pebble in that it cannot be removed once placed on a node of the computation graph. Just as placing a pebble corresponds to a write to regular memory, placing a sticker corresponds to a write to the write-once memory. Bounds are shown on the pebble-sticker tradeoffs required to evaluate trees and planar graphs. Finally, we define the complexity class WO-PSPACE as the class of problems which can be solved with a polynomial amount of write-once memory, and show that it is equal to P.
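The two central notions of the abstract, the write-once constraint (bits may be set but never reset) and persistence (any past memory state is reconstructible), can be illustrated with a minimal sketch. This is not the paper's construction; the class and method names (`WriteOnceRAM`, `set_bit`, `state_at`) are illustrative, and the append-only log is one simple way to make past states recoverable:

```python
class WriteOnceRAM:
    """Toy model of write-once memory: every cell starts at 0, and a bit,
    once set to 1, can never be reset."""

    def __init__(self, size):
        self.cells = [0] * size
        self.log = []  # append-only record of set cells, in write order

    def set_bit(self, addr):
        # Enforce the write-once property: rewriting a set cell is an error.
        if self.cells[addr] == 1:
            raise ValueError(f"cell {addr} already written: write-once violated")
        self.cells[addr] = 1
        self.log.append(addr)

    def read(self, addr):
        return self.cells[addr]

    def state_at(self, step):
        """Reconstruct the memory state after the first `step` writes.
        Because the log is itself append-only (hence storable in write-once
        memory), every earlier state of the computation stays recoverable --
        the persistence property described in the abstract."""
        state = [0] * len(self.cells)
        for addr in self.log[:step]:
            state[addr] = 1
        return state
```

Note that the log grows by one entry per write, echoing the abstract's observation that general simulations on write-once memory require space proportional to the number of steps simulated.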



Copyright information

© Springer-Verlag New York Inc 1992

Authors and Affiliations

  • Sandy Irani, Computer Science Division, University of California, Berkeley, USA
  • Moni Naor, IBM Almaden Research Center, San Jose, USA
  • Ronitt Rubinfeld, Computer Science Division, University of California, Berkeley, USA
