Encyclopedia of Big Data Technologies

2019 Edition
Editors: Sherif Sakr, Albert Y. Zomaya

TARDiS: A Branch-and-Merge Approach to Weak Consistency

  • Natacha Crooks
Reference work entry
DOI: https://doi.org/10.1007/978-3-319-77525-8_160

Definitions

This article targets applications in large-scale distributed systems that adopt weaker consistency models in exchange for higher availability and better performance.

Overview

In light of the conflicting goals of low latency, high availability, and partition tolerance (Brewer 2000; Gilbert and Lynch 2002), many wide-area services and applications choose to renounce strong consistency in favor of eventual (Vogels 2008) or causal consistency (Ahamad et al. 1994): COPS (Lloyd et al. 2011), Dynamo (DeCandia et al. 2007), Riak (Basho 2017), and Voldemort (2012) are among the many services (Lloyd et al. 2011) that provide a form of weak consistency to applications. Applications that make this decision are often referred to as ALPS applications (availability, low latency, partition tolerance, and high scalability). Distributed ALPS applications, however, are hard to reason about: eventual consistency provides no...
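The branch-and-merge idea behind TARDiS can be contrasted with the last-writer-wins rule common in eventually consistent stores: rather than silently discarding one of two concurrent updates, the system exposes the divergent branches and lets the application merge them. The sketch below is purely illustrative (it is not TARDiS's actual API); the `Branch` class and `merge` function are hypothetical names for a toy replicated counter whose replicas diverge and then converge via an application-supplied merge.

```python
# Illustrative sketch (not TARDiS's actual API): two replicas of a counter
# branch from a common base state, apply concurrent updates, and are then
# merged by application logic that preserves both updates.

class Branch:
    """One divergent line of state: a base value plus recorded deltas."""
    def __init__(self, base):
        self.base = base
        self.ops = []          # deltas recorded on this branch, replayed at merge

    def add(self, delta):
        self.ops.append(delta)

    def value(self):
        return self.base + sum(self.ops)

def merge(base, *branches):
    """Application-level merge: replay every branch's deltas onto the common base.
    A last-writer-wins store would instead keep one branch and drop the others."""
    return base + sum(d for b in branches for d in b.ops)

# Two replicas branch from the same base value under a partition...
left, right = Branch(10), Branch(10)
left.add(+5)    # replica A increments by 5
right.add(-3)   # replica B decrements by 3

# ...and converge by an explicit merge that keeps both updates.
merged = merge(10, left, right)   # 10 + 5 - 3 = 12
```

Under last-writer-wins, either the `+5` or the `-3` would be lost; making both branches visible at merge time is what allows the application to resolve the conflict semantically.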


References

  1. Ahamad M, Neiger G, Burns J, Kohli P, Hutto P (1994) Causal memory: definitions, implementation and programming. Technical report, Georgia Institute of Technology
  2. Apache (2017) Cassandra. http://cassandra.apache.org/
  3. Bailis P, Fekete A, Ghodsi A, Hellerstein JM, Stoica I (2012) The potential dangers of causal consistency and an explicit solution. In: Proceedings of the 3rd ACM symposium on cloud computing, SOCC ’12, pp 22:1–22:7. http://doi.acm.org/10.1145/2391229.2391251
  4. Basho (2017) Riak. http://basho.com/products/
  5. Berenson H, Bernstein P, Gray J, Melton J, O’Neil E, O’Neil P (1995) A critique of ANSI SQL isolation levels. In: ACM SIGMOD record, vol 24, pp 1–10
  6. Brewer EA (2000) Towards robust distributed systems (abstract). In: Proceedings of the 19th ACM symposium on principles of distributed computing, PODC ’00. http://doi.acm.org/10.1145/343477.343502
  7. Cooper BF, Ramakrishnan R, Srivastava U, Silberstein A, Bohannon P, Jacobsen HA, Puz N, Weaver D, Yerneni R (2008) PNUTS: Yahoo!’s hosted data serving platform. Proc VLDB Endow 1(2):1277–1288
  8. Crooks N, Pu Y, Estrada N, Gupta T, Alvisi L, Clement A (2016) TARDiS: a branch-and-merge approach to weak consistency. In: Proceedings of the 2016 international conference on management of data, SIGMOD ’16. ACM, New York, pp 1615–1628. http://doi.acm.org/10.1145/2882903.2882951
  9. DeCandia G, Hastorun D, Jampani M, Kakulapati G, Lakshman A, Pilchin A, Sivasubramanian S, Vosshall P, Vogels W (2007) Dynamo: Amazon’s highly available key-value store. In: Proceedings of 21st ACM symposium on operating systems principles, SOSP ’07, pp 205–220. http://doi.acm.org/10.1145/1294261.1294281
  10. Du J, Iorgulescu C, Roy A, Zwaenepoel W (2014) Gentlerain: cheap and scalable causal consistency with physical clocks. In: Proceedings of the ACM symposium on cloud computing, SOCC ’14, pp 4:1–4:13. http://doi.acm.org/10.1145/2670979.2670983
  11. Gilbert S, Lynch N (2002) Brewer’s conjecture and the feasibility of consistent, available, partition-tolerant web services. SIGACT News 33(2):51–59. http://doi.acm.org/10.1145/564585.564601
  12. Git (2017) Git: the fast version control system. http://git-scm.com
  13. Lloyd W, Freedman MJ, Kaminsky M, Andersen DG (2011) Don’t settle for eventual: scalable causal consistency for wide-area storage with COPS. In: Proceedings of the 23rd ACM symposium on operating systems principles, SOSP ’11, pp 401–416. http://doi.acm.org/10.1145/2043556.2043593
  14. Lloyd W, Freedman MJ, Kaminsky M, Andersen DG (2013) Stronger semantics for low-latency geo-replicated storage. In: Proceedings of the 10th USENIX symposium on networked systems design and implementation, NSDI ’13, pp 313–328. https://www.usenix.org/conference/nsdi13/technical-sessions/presentation/lloyd
  15. Mahajan P, Setty S, Lee S, Clement A, Alvisi L, Dahlin M, Walfish M (2011) Depot: cloud storage with minimal trust. ACM Trans Comput Syst 29(4):12
  16. Olson MA, Bostic K, Seltzer M (1999) Berkeley DB. In: Proceedings of the annual conference on USENIX annual technical conference, ATEC ’99. http://dl.acm.org/citation.cfm?id=1268708.1268751
  17. Shapiro M, Preguiça N, Baquero C, Zawirski M (2011) A comprehensive study of convergent and commutative replicated data types. Rapport de recherche RR-7506, INRIA
  18. Sovran Y, Power R, Aguilera MK, Li J (2011) Transactional storage for geo-replicated systems. In: Proceedings of the 23rd ACM symposium on operating systems principles, SOSP ’11, pp 385–400. http://doi.acm.org/10.1145/2043556.2043592
  19. Terry DB, Theimer MM, Petersen K, Demers AJ, Spreitzer MJ, Hauser CH (1995) Managing update conflicts in Bayou, a weakly connected replicated storage system. In: Proceedings of the 15th ACM symposium on operating systems principles, SOSP ’95, pp 172–182. http://doi.acm.org/10.1145/224056.224070
  20. Thomas RH (1979) A majority consensus approach to concurrency control for multiple copy databases. ACM Trans Database Syst 4(2):180–209. http://doi.acm.org/10.1145/320071.320076
  21. Vogels W (2008) Eventually consistent. Queue 6(6):14–19. http://doi.acm.org/10.1145/1466443.1466448
  22. Voldemort (2012) Project Voldemort: a distributed database. http://project-voldemort.com/
  23. Wikipedia (2017) Wikipedia: conflicting sources. http://en.wikipedia.org/wiki/Wikipedia:Conflicting_sources

Copyright information

© Springer Nature Switzerland AG 2019

Authors and Affiliations

  1. The University of Texas at Austin, Austin, USA