# On the time and space complexity of computation using write-once memory or is pen really much worse than pencil?

## Abstract

We introduce a model of computation based on write-once memory, which has the property that bits may be set but not reset. Our model consists of a RAM with a small amount of regular memory (such as logarithmic, or *n*^{α} for α < 1, where *n* is the size of the problem) and a polynomial amount of write-once memory. Bounds are given on the time required to simulate, on write-once memory, algorithms that originally run on a RAM with a polynomial amount of regular memory. We attempt to characterize algorithms that can be simulated on our write-once memory model with very little slowdown. A persistent computation is one in which, at all times, the memory state of the computation at any previous point in time can be reconstructed. We show that any data structure or computation implemented on this write-once memory model can be made persistent without sacrificing much in the way of running time or space. The space requirements of algorithms running on the write-once model are also studied. We show that general simulations of algorithms originally running on a RAM with regular memory by algorithms running on our write-once memory model require space proportional to the number of steps simulated. To study the space complexity further, we define an analogue of the pebbling game, called the pebble-sticker game. A sticker differs from a pebble in that it cannot be removed once placed on a node of the computation graph. Just as placing a pebble corresponds to a write to regular memory, placing a sticker corresponds to a write to the write-once memory. Bounds are shown on the pebble-sticker trade-offs required to evaluate trees and planar graphs. Finally, we define the complexity class WO-PSPACE as the class of problems that can be solved with a polynomial amount of write-once memory, and show that it is equal to *P*.
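The set-but-not-reset property, and the reason persistence comes almost for free on such memory, can be illustrated with a minimal sketch. This is not the paper's construction; the class, its time-indexed write log, and the replay-based reconstruction are illustrative assumptions, chosen only to show that a memory whose writes are never undone trivially supports reconstructing any earlier state.

```python
class WriteOnceMemory:
    """Illustrative sketch of write-once memory: every bit starts
    at 0 and may be set to 1, but can never be reset to 0."""

    def __init__(self, size):
        self.bits = [0] * size
        self.history = []  # ordered log of indices that were set

    def write(self, i):
        """Set bit i. Setting an already-set bit is a no-op;
        resetting is impossible by construction."""
        if self.bits[i] == 0:
            self.bits[i] = 1
            self.history.append(i)

    def state_at(self, t):
        """Reconstruct the memory state after the first t writes.
        Persistence is cheap precisely because writes are never
        undone: replaying a prefix of the log recovers any past state."""
        state = [0] * len(self.bits)
        for i in self.history[:t]:
            state[i] = 1
        return state
```

For example, after `write(2)` and then `write(0)`, `state_at(1)` recovers the intermediate state `[0, 0, 1, 0]`, while the current state is `[1, 0, 1, 0]`.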
