CaLibRe: A Better Consistency-Latency Tradeoff for Quorum Based Replication Systems

  • Sathiya Prabhu Kumar (corresponding author)
  • Sylvain Lefebvre
  • Raja Chiky
  • Eric Gressier-Soudan
Conference paper
Part of the Lecture Notes in Computer Science book series (LNCS, volume 9262)

Abstract

In multi-writer, multi-reader systems, data consistency is governed by the number of replica nodes contacted during read and write operations. Contacting enough nodes to guarantee consistency incurs a communication cost and puts data availability at risk. In this paper, we describe an enhancement of a consistency protocol called LibRe, which ensures consistency while contacting a minimum number of replica nodes. Retaining the original protocol's idea of achieving consistent reads with the help of a registry, the enhancement integrates and distributes the registry inside the storage system in order to achieve better performance.
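The tradeoff described above can be illustrated with a minimal model (all names here are illustrative, not the paper's actual API): in an N-replica system, classic quorum replication is consistent only when read and write sets intersect (R + W > N), whereas a registry that records which replicas hold the latest write allows a consistent read that contacts a single replica.

```python
# Minimal sketch of the quorum-vs-registry tradeoff; hypothetical names,
# not the LibRe implementation itself.

N = 3  # replication factor

def is_strongly_consistent(r, w, n=N):
    """Classic quorum rule: read and write sets must intersect."""
    return r + w > n

# Plain quorum replication: consistency costs extra replica contacts.
assert is_strongly_consistent(r=2, w=2)       # QUORUM/QUORUM: consistent
assert not is_strongly_consistent(r=1, w=1)   # ONE/ONE: stale reads possible

# Registry idea: after each write, record which replicas are up to date;
# a read consults the registry and contacts one fresh replica (R = 1).
registry = {}  # key -> set of replica ids holding the latest version

def write(key, acked):
    """Record which replicas acknowledged the latest write for `key`."""
    registry[key] = set(acked)

def read_replica(key):
    """Return any replica known to hold the latest value, or None."""
    fresh = registry.get(key)
    return next(iter(fresh)) if fresh else None

write("k", acked={1})
print(read_replica("k"))  # prints 1: the only replica that acked the write
```

The point of the sketch is that the registry moves the cost of consistency from the read path (contacting R replicas) to a lookup, which is why distributing the registry inside the storage system matters for performance.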

We propose an initial implementation of the model inside the Cassandra distributed data store and benchmark the performance of this LibRe incarnation against Cassandra's native consistency options ONE, QUORUM and ALL. The test results show that with the LibRe protocol an application experiences a number of stale reads similar to the strong consistency options offered by Cassandra, while achieving lower latency and comparable availability.
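For context, Cassandra's native consistency levels determine how many replica acknowledgements each operation waits for, which is what drives the latency differences measured in the benchmark. A small sketch using the standard Cassandra semantics (RF is the replication factor):

```python
# Replica acknowledgements required per operation under Cassandra's
# standard consistency-level semantics.

def replicas_contacted(level, rf):
    """Number of replica acks an operation waits for at a given level."""
    return {"ONE": 1, "QUORUM": rf // 2 + 1, "ALL": rf}[level]

rf = 3
for level in ("ONE", "QUORUM", "ALL"):
    print(level, replicas_contacted(level, rf))
# With RF=3: ONE waits for 1 ack, QUORUM for 2, ALL for 3.  The goal stated
# above is ONE-like latency with read freshness close to QUORUM/ALL.
```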

Keywords

Distributed storage systems · Eventual consistency · Quorum systems


Copyright information

© Springer International Publishing Switzerland 2015

Authors and Affiliations

  • Sathiya Prabhu Kumar (1, 2; corresponding author)
  • Sylvain Lefebvre (1)
  • Raja Chiky (1)
  • Eric Gressier-Soudan (2)
  1. LISITE Laboratory, ISEP Paris, Paris, France
  2. CEDRIC Laboratory, CNAM Paris, Paris, France