
Cluster Computing, Volume 18, Issue 4, pp 1581–1593

SLA-aware data migration in a shared hybrid storage cluster

  • Jianzhe Tai
  • Bo Sheng
  • Yi Yao
  • Ningfang Mi
Article

Abstract

Data volumes in today's world have grown tremendously. Large-scale and diverse data sets raise new challenges in storage, processing, and querying; in particular, real-time data analysis is becoming increasingly common. Multi-tiered, hybrid storage architectures, which combine solid-state drives (SSDs) with hard disk drives (HDDs), have therefore become attractive in enterprise data centers as a way to achieve high performance and large capacity simultaneously. From a service provider's perspective, however, efficiently managing all the data hosted in a data center so as to provide high quality of service (QoS) remains a core and difficult problem. Modern enterprise data centers often share their storage resources among a large variety of applications, which may demand different service level agreements (SLAs). Furthermore, a single user query from a data-intensive application can easily trigger a scan of a gigantic data set and inject a burst of disk I/Os into the back-end storage system, eventually causing disastrous performance degradation. In this paper, we therefore present a new approach for automated data movement in multi-tiered, hybrid storage clusters: it live-migrates data among different storage media, aiming to support multiple SLAs for applications with dynamic workloads at minimal cost. Detailed trace-driven simulations show that the new approach significantly improves overall performance, providing higher QoS for applications and reducing the occurrence of SLA violations. Sensitivity analysis under different system environments further validates the effectiveness and robustness of the approach.
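The mechanism itself is detailed in the full paper; the following is only a rough, hypothetical Python sketch of the kind of SLA-aware tiering policy the abstract describes. Every name here (Extent, App, the latency-based violation test, and the heat-based promotion/demotion heuristic) is an illustrative assumption, not the authors' algorithm.

```python
# Hypothetical sketch of an SLA-aware tiering round (illustrative only):
# promote hot extents of SLA-violating applications to the SSD tier,
# demoting the coldest SSD extents of compliant applications if needed.
from dataclasses import dataclass


@dataclass
class Extent:
    app: str            # owning application
    heat: float         # recent access frequency (accesses/s)
    on_ssd: bool = False


@dataclass
class App:
    sla_latency_ms: float       # agreed latency target
    observed_latency_ms: float  # measured from recent I/Os


def rebalance(extents, apps, ssd_capacity):
    """One migration round: shift SSD space toward SLA-violating apps."""
    ssd = [e for e in extents if e.on_ssd]
    # 1. Find applications currently violating their SLA target.
    violating = {name for name, a in apps.items()
                 if a.observed_latency_ms > a.sla_latency_ms}
    # 2. Candidate promotions: hottest HDD extents of violating apps.
    candidates = sorted((e for e in extents
                         if not e.on_ssd and e.app in violating),
                        key=lambda e: e.heat, reverse=True)
    for e in candidates:
        if len(ssd) >= ssd_capacity:
            # 3. Demote the coldest SSD extent of a compliant app, if
            #    any exists; otherwise stop (each move has an I/O cost).
            demotable = [s for s in ssd if s.app not in violating]
            if not demotable:
                break
            victim = min(demotable, key=lambda s: s.heat)
            victim.on_ssd = False
            ssd.remove(victim)
        e.on_ssd = True
        ssd.append(e)


# Example: the analytics app misses its 10 ms target, so its hot extent
# displaces the archive app's cold extent from the single SSD slot.
apps = {"analytics": App(sla_latency_ms=10, observed_latency_ms=25),
        "archive":   App(sla_latency_ms=100, observed_latency_ms=40)}
extents = [Extent("analytics", heat=50.0),
           Extent("archive", heat=5.0, on_ssd=True)]
rebalance(extents, apps, ssd_capacity=1)
```

A production controller would additionally throttle the migration traffic itself, since the abstract stresses supporting SLAs at minimal cost: moving data too aggressively could inject the very I/O bursts the policy is meant to absorb.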

Keywords

Data migration · Resource allocation · Service level agreement (SLA) · Bursty workloads · Hybrid storage clusters

Notes

Acknowledgments

This work was partially supported by NSF Grant CNS-1251129 and an IBM Faculty Award.


Copyright information

© Springer Science+Business Media New York 2015

Authors and Affiliations

  1. Northeastern University, Boston, USA
  2. University of Massachusetts Boston, Boston, USA
