Abstract
Scaling machine learning (ML) methods to large datasets requires distributed data architectures and algorithms that support their iterative nature, in which the same data records are processed several times. Data caching is therefore key to minimizing data transfers across iterations at each node and, thus, to overall scalability. In this work we propose a two-level caching architecture (disk and memory) and benchmark different caching strategies in a distributed machine learning setup over a cluster with no shared memory. Our results strongly favour strategies in which (1) datasets are partitioned and preloaded across the distributed memory of the cluster nodes, and (2) algorithms use data-locality information to synchronize computations at each iteration. This supports the convergence towards models where “computing goes to data”, as observed in other Big Data contexts, and helps align strategies for parallelizing ML algorithms with the configuration of the underlying computing infrastructure.
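The paper does not reproduce its implementation here, but the two-level idea is easy to illustrate. The following Python sketch shows one plausible per-node design: an LRU memory tier backed by a local disk tier, with a remote fetch only on a full miss. All names (`TwoLevelCache`, `fetch_remote`, the pickle-on-disk format, the capacity parameter) are illustrative assumptions, not the authors' implementation.

```python
from collections import OrderedDict
import os
import pickle

class TwoLevelCache:
    """Hypothetical per-node cache: LRU memory tier backed by local disk."""

    def __init__(self, cache_dir, mem_capacity=128):
        self.mem = OrderedDict()          # in-memory tier, kept in LRU order
        self.mem_capacity = mem_capacity  # max records held in RAM
        self.cache_dir = cache_dir        # local disk tier
        os.makedirs(cache_dir, exist_ok=True)

    def _disk_path(self, key):
        return os.path.join(self.cache_dir, f"{key}.pkl")

    def _promote(self, key, record):
        # Place the record in the memory tier and evict the LRU entry
        # from RAM only; its copy remains on disk.
        self.mem[key] = record
        self.mem.move_to_end(key)
        if len(self.mem) > self.mem_capacity:
            self.mem.popitem(last=False)

    def put(self, key, record):
        # Write through to disk so the record survives memory eviction.
        with open(self._disk_path(key), "wb") as f:
            pickle.dump(record, f)
        self._promote(key, record)

    def get(self, key, fetch_remote):
        if key in self.mem:               # 1) memory hit
            self.mem.move_to_end(key)
            return self.mem[key]
        path = self._disk_path(key)
        if os.path.exists(path):          # 2) disk hit: promote to memory
            with open(path, "rb") as f:
                record = pickle.load(f)
            self._promote(key, record)
        else:                             # 3) miss: fetch from remote store
            record = fetch_remote(key)
            self.put(key, record)
        return record
```

Under strategy (1), a driver process would call `put` to preload each data partition into the cache of its assigned node before training starts; under strategy (2), the scheduler would then route each iteration's tasks to the node whose cache already holds the corresponding partition, so that repeated passes over the data hit the memory tier instead of the network.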
Copyright information
© 2014 Springer-Verlag Berlin Heidelberg
Cite this paper
Ovalle, J.E.A., Ramos-Pollan, R., González, F.A. (2014). Distributed Cache Strategies for Machine Learning Classification Tasks over Cluster Computing Resources. In: Hernández, G., et al. (eds.) High Performance Computing. CARLA 2014. Communications in Computer and Information Science, vol 485. Springer, Berlin, Heidelberg. https://doi.org/10.1007/978-3-662-45483-1_4
DOI: https://doi.org/10.1007/978-3-662-45483-1_4
Publisher Name: Springer, Berlin, Heidelberg
Print ISBN: 978-3-662-45482-4
Online ISBN: 978-3-662-45483-1