
A reliable and energy-efficient storage system with erasure coding cache

  • Ji-guang Wan
  • Da-ping Li
  • Xiao-yang Qu
  • Chao Yin
  • Jun Wang
  • Chang-sheng Xie
Article

Abstract

In modern energy-saving replication storage systems, a primary group of disks is always powered up to serve incoming requests, while the other disks are often spun down during slack periods to save energy. However, because new writes cannot be immediately synchronized to all disks, system reliability is degraded. In this paper, we develop a high-reliability, energy-efficient replication storage system named RERAID, based on RAID10. RERAID uses part of the free space in the primary disk group to construct an erasure-coded cache at the front end that absorbs new writes. Because the code cache can recover from the failure of two or more disks by erasure coding, RERAID guarantees reliability comparable to that of a RAID10 storage system. In addition, we develop an algorithm, called erasure coding write (ECW), that buffers many small random writes into a few large writes, which are then written to the code cache sequentially and in parallel to improve write performance. Experimental results show that RERAID significantly improves write performance and saves more energy than existing solutions.
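The ECW idea described above can be sketched in a few lines. The sketch below is illustrative only, not the authors' implementation: the class name `EcwBuffer`, the chunk and stripe sizes, and the single XOR parity are all assumptions made for brevity. RERAID's code cache tolerates two or more disk failures, which would require a stronger erasure code (e.g., Reed-Solomon); the buffering and stripe-flush logic is the part being demonstrated.

```python
# Hypothetical sketch of the erasure coding write (ECW) idea: many small
# random writes accumulate in memory and are flushed to the code cache
# as a few large, stripe-aligned writes with parity attached.

CHUNK = 4            # bytes per data chunk (tiny, for illustration)
DATA_DISKS = 3       # data chunks per stripe

class EcwBuffer:
    def __init__(self):
        self.pending = bytearray()   # small writes waiting to form a stripe
        self.stripes = []            # flushed (data_chunks, parity) tuples

    def write(self, data: bytes):
        """Absorb a small random write; flush whenever a full stripe forms."""
        self.pending += data
        stripe_bytes = CHUNK * DATA_DISKS
        while len(self.pending) >= stripe_bytes:
            stripe = bytes(self.pending[:stripe_bytes])
            del self.pending[:stripe_bytes]
            self._flush(stripe)

    def _flush(self, stripe: bytes):
        """Split one large write into chunks and compute XOR parity.

        A real code cache would use a code tolerating >= 2 failures."""
        chunks = [stripe[i * CHUNK:(i + 1) * CHUNK] for i in range(DATA_DISKS)]
        parity = bytes(a ^ b ^ c for a, b, c in zip(*chunks))
        self.stripes.append((chunks, parity))

    def recover(self, stripe_no: int, lost: int) -> bytes:
        """Rebuild one lost data chunk from the survivors plus parity."""
        chunks, parity = self.stripes[stripe_no]
        survivors = [c for i, c in enumerate(chunks) if i != lost]
        return bytes(a ^ b ^ p for a, b, p in zip(*survivors, parity))
```

For example, writing 12 bytes fills exactly one stripe of three 4-byte chunks; if chunk 1 is later lost, `recover(0, 1)` rebuilds it from the two surviving chunks and the parity.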

Key words

Reliability; Energy efficiency; Storage system; Erasure coding; Cache management

CLC number

TP316.4 



Copyright information

© Zhejiang University and Springer-Verlag GmbH Germany, part of Springer Nature 2017

Authors and Affiliations

  1. Wuhan National Laboratory for Optoelectronics, Department of Computer Science and Technology, Huazhong University of Science and Technology, Wuhan, China
