Modular Framework for Data Prefetching and Replacement at the Edge

Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 10973)

Abstract

In this paper, we define and evaluate a cache management framework based on Bayesian reasoning that minimizes data movement, and hence the latency and energy consumption, of edge devices that interact with the cloud to retrieve needed data. The framework can be implemented either directly as a real cache or as a virtual cache that acts as an advisor to a real cache; the latter strategy is useful when the real cache already exists and handles complexities such as the pinning and unpinning of objects. The framework makes prefetching and eviction decisions using contextual and temporal relationships among accesses while automatically tuning its parameters in the background. This flexibility and adaptability are crucial at the edge, where cloud interactions are highly context-dependent and heterogeneous. Using several storage traces, the paper shows that the mechanism performs at least as well as other state-of-the-art algorithms and adapts faster to workload changes.
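To make the virtual-cache "advisor" idea concrete, the following is a minimal illustrative sketch, not the paper's actual algorithm: it tracks per-object follow-on counts, estimates a Laplace-smoothed conditional probability that a candidate object is accessed next, and uses that score both to suggest prefetch candidates and to advise the real cache on an eviction victim. All class and method names here are hypothetical.

```python
from collections import defaultdict

class BayesianCacheAdvisor:
    """Illustrative advisor that scores objects by a smoothed estimate of
    P(next access = candidate | last access). Hypothetical sketch only."""

    def __init__(self, alpha=1.0):
        self.alpha = alpha                              # Laplace smoothing prior
        self.follow = defaultdict(lambda: defaultdict(int))  # follow-on counts
        self.count = defaultdict(int)                   # per-object access counts
        self.last = None                                # most recently accessed object

    def record_access(self, obj):
        """Update the temporal model with an observed access."""
        if self.last is not None:
            self.follow[self.last][obj] += 1
        self.count[obj] += 1
        self.last = obj

    def score(self, candidate):
        """Smoothed estimate of the probability that `candidate` follows
        the most recent access."""
        if self.last is None:
            return 0.0
        succ = self.follow[self.last]
        total = sum(succ.values())
        vocab = max(len(self.count), 1)
        return (succ.get(candidate, 0) + self.alpha) / (total + self.alpha * vocab)

    def prefetch_candidates(self, k=2):
        """Advise fetching the k most likely successors of the last access."""
        if self.last is None:
            return []
        succ = self.follow[self.last]
        return sorted(succ, key=succ.get, reverse=True)[:k]

    def eviction_victim(self, cached_objects):
        """Advise evicting the cached object least likely to be needed next."""
        return min(cached_objects, key=self.score)
```

Because the advisor only emits suggestions, an existing real cache can consult it while still enforcing its own constraints (for example, never evicting a pinned object), which mirrors the virtual-cache deployment described in the abstract.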

Acknowledgments

The authors would like to thank Sam Fineberg and other colleagues from HPE, as well as Jesse Friedman, Anis Alazzawe, and Alexey Uversky from Temple University, for valuable discussions and contributions during the initial phases of this work.

Copyright information

© Springer International Publishing AG, part of Springer Nature 2018

Authors and Affiliations

  1. Temple University, Philadelphia, USA
  2. University of Florida, Gainesville, USA
  3. Hewlett Packard Enterprise, Palo Alto, USA
