
Parallelizing Inline Data Reduction Operations for Primary Storage Systems

  • Jeonghyeon Ma
  • Chanik Park
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10421)

Abstract

Data reduction operations such as deduplication and compression are widely used to save storage capacity in primary storage systems. However, these operations are compute-intensive, and because high-performance storage devices such as SSDs are now deployed in most primary storage systems, data reduction operations have become a performance bottleneck in SSD-based primary storage systems.

In this paper, we propose parallel data reduction techniques for deduplication and compression that utilize both a multi-core CPU and a GPU in an integrated manner. First, we introduce bin-based data deduplication, a parallel deduplication technique in which CPU-based parallelism carries the main workload while the GPU serves as a co-processor to the CPU. Second, we propose a parallel compression technique in which the main computation is performed on the GPU while the CPU is responsible only for post-processing. Third, we propose a parallel technique that handles deduplication and compression together and controls when and how the GPU is used, as sketched below. Experimental evaluation shows that our proposed techniques achieve 15.0%, 88.3%, and 89.7% better throughput than a CPU-only baseline for deduplication, compression, and integrated data reduction, respectively. Our proposed techniques make it straightforward to apply data reduction operations to SSD-based primary storage systems.
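To make the pipeline structure concrete, the following is a minimal sketch of an inline data reduction pass: fixed-size chunks are fingerprinted in parallel for deduplication, and only unique chunks are compressed. All names here (CHUNK_SIZE, reduce_inline, etc.) are illustrative and not taken from the paper; in particular, the heavy compression stage is shown on a CPU thread pool, standing in for the GPU offload path the paper describes, and the bin-based deduplication and GPU scheduling policy are not modeled.

```python
# Illustrative sketch only (not the paper's implementation): a CPU-parallel
# inline data-reduction pipeline combining chunk deduplication and compression.
import hashlib
import zlib
from concurrent.futures import ThreadPoolExecutor

CHUNK_SIZE = 4096  # fixed-size chunking; a typical block size in primary storage


def fingerprint(chunk: bytes) -> str:
    # SHA-1 fingerprint used as the deduplication key (collisions ignored here)
    return hashlib.sha1(chunk).hexdigest()


def reduce_inline(data: bytes, workers: int = 4):
    chunks = [data[i:i + CHUNK_SIZE] for i in range(0, len(data), CHUNK_SIZE)]

    with ThreadPoolExecutor(max_workers=workers) as pool:
        # Stage 1: fingerprint all chunks in parallel (hashlib releases the GIL)
        fps = list(pool.map(fingerprint, chunks))

        # Stage 2: deduplicate by fingerprint; keep only the first occurrence
        unique = {}
        for fp, chunk in zip(fps, chunks):
            unique.setdefault(fp, chunk)

        # Stage 3: compress unique chunks in parallel (zlib also releases the GIL);
        # in the paper this compute-heavy stage is what gets offloaded to the GPU
        compressed = dict(zip(unique, pool.map(zlib.compress, unique.values())))

    # The ordered fingerprint list is the "recipe" needed to reconstruct the data
    return compressed, fps


if __name__ == "__main__":
    payload = (b"A" * CHUNK_SIZE) * 8 + bytes(range(256)) * 64
    store, recipe = reduce_inline(payload)
    print(f"{len(recipe)} chunks -> {len(store)} unique, "
          f"{sum(len(v) for v in store.values())} bytes after compression")
```

Both stages are embarrassingly parallel at chunk granularity, which is why they map onto either multi-core CPUs or a GPU; the integrated policy in the paper additionally decides when the offload (and its data-transfer cost) is worthwhile.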

Keywords

Primary storage · Inline data reduction scheme · GPU


Copyright information

© Springer International Publishing AG 2017

Authors and Affiliations

  1. Department of Computer Science and Engineering, POSTECH, Pohang, South Korea
