A novel dynamic network data replication scheme based on historical access record and proactive deletion
Data replication is becoming a popular technology in many fields such as cloud storage, data grids, and P2P systems. By replicating files to other servers/nodes, we can reduce network traffic and file access time, and increase data availability to react to natural and man-made disasters. However, more replicas do not always yield better system performance. Replicas indeed decrease read access time and provide better fault tolerance, but if we consider write access, maintaining a large number of replicas results in a huge update overhead. Hence, a trade-off between read access time and write update cost is needed. File popularity is an important factor in making decisions about data replication. To avoid being misled by short-term data access fluctuations, historical file popularity can be used to select truly popular files. In this research, a dynamic data replication strategy is proposed based on two ideas. The first employs historical access records, which are useful for selecting files to replicate. The second is a proactive deletion method, which is applied to control the replica number so as to reach an optimal balance between read access time and write update overhead. A unified cost model is used to measure and compare the performance of our data replication algorithm against existing algorithms. The results indicate that the new algorithm performs considerably better than those algorithms.
Keywords: Data replication · Read overhead · Update overhead · Historical access record · Proactive deletion
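The two ideas in the abstract can be sketched in code. The snippet below is an illustrative Python sketch, not the paper's actual model: it assumes a half-life decay for weighting historical access records, and hypothetical cost parameters (`read_cost`, `write_rate`, `update_cost`) for the read-benefit versus update-overhead trade-off that drives replica addition and proactive deletion.

```python
from dataclasses import dataclass, field

HALF_LIFE = 10.0  # assumed decay half-life (in arbitrary time units); illustrative


@dataclass
class FileStats:
    accesses: list = field(default_factory=list)  # historical access timestamps
    replicas: int = 1


def popularity(stats: FileStats, now: float, half_life: float = HALF_LIFE) -> float:
    """Half-life-weighted popularity: older access records count less,
    which damps short-term access fluctuations."""
    return sum(0.5 ** ((now - t) / half_life) for t in stats.accesses)


def adjust_replicas(stats: FileStats, now: float, read_cost: float = 1.0,
                    write_rate: float = 0.2, update_cost: float = 1.0,
                    max_replicas: int = 5) -> int:
    """Add a replica while the marginal read saving exceeds the marginal
    update overhead; otherwise proactively delete one (hypothetical model)."""
    p = popularity(stats, now)
    # Marginal read benefit of going from r to r+1 replicas, assuming
    # read load spreads evenly across replicas (illustrative assumption).
    r = stats.replicas
    read_gain = p * read_cost * (1.0 / r - 1.0 / (r + 1))
    # Every write must be propagated to each additional replica.
    write_penalty = write_rate * update_cost
    if read_gain > write_penalty and r < max_replicas:
        stats.replicas += 1          # replicate a popular file
    elif read_gain < write_penalty and r > 1:
        stats.replicas -= 1          # proactive deletion of a surplus replica
    return stats.replicas
```

Under these assumed parameters, a file with many recent accesses gains a replica, while a file whose access records have decayed loses one, which is the balance between read access time and write update overhead the strategy aims for.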