Encyclopedia of Big Data Technologies

Living Edition
| Editors: Sherif Sakr, Albert Zomaya

Achieving Low Latency Transactions for Geo-replicated Storage with Blotter

Living reference work entry
DOI: https://doi.org/10.1007/978-3-319-63962-8_158-1



Blotter is a protocol for executing transactions in geo-replicated storage systems under non-monotonic snapshot isolation semantics. A geo-replicated storage system is composed of a set of nodes running in multiple data centers located in different geographical locations. The nodes in each data center replicate either all of the data items in the database or only a subset of them, corresponding to a full replication or a partial replication approach, respectively. Blotter was primarily designed for full replication but can also be used in partial replication scenarios. Under non-monotonic snapshot isolation, a transaction reads from a snapshot that reflects all the writes of a set of transactions that includes, at least, all locally committed transactions and the remote transactions known when the transaction starts. Two concurrent transactions conflict if their...
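The snapshot semantics described above can be illustrated with a small sketch. This is not Blotter's actual algorithm: the class names, the commit log, and the write-write conflict rule used at commit time (the standard rule in snapshot isolation variants) are all illustrative assumptions, chosen only to make the reading-from-a-snapshot behavior concrete.

```python
# Toy sketch of snapshot-isolation-style transactions, loosely inspired by
# the semantics described above. NOT Blotter's algorithm: all names and the
# write-write conflict rule are illustrative assumptions.

class Store:
    def __init__(self):
        self.committed = {}  # key -> latest committed value
        self.log = []        # write sets of committed transactions, in order

    def begin(self):
        # A transaction reads from a snapshot reflecting all transactions
        # committed (and known) at the time it starts.
        return Txn(self, dict(self.committed), len(self.log))

class Txn:
    def __init__(self, store, snapshot, start_pos):
        self.store = store
        self.snapshot = snapshot    # frozen view taken at begin()
        self.start_pos = start_pos  # position in the commit log at begin()
        self.writes = {}

    def read(self, key):
        # Reads see the transaction's own writes first, then the snapshot.
        return self.writes.get(key, self.snapshot.get(key))

    def write(self, key, value):
        self.writes[key] = value

    def commit(self):
        # Assumed conflict rule: abort on a write-write conflict with any
        # transaction that committed after this transaction's snapshot.
        for write_set in self.store.log[self.start_pos:]:
            if write_set & self.writes.keys():
                return False  # conflict -> abort
        self.store.committed.update(self.writes)
        self.store.log.append(set(self.writes))
        return True

store = Store()
t1 = store.begin()
t2 = store.begin()
t1.write("x", 1)
t2.write("x", 2)
ok1 = t1.commit()  # no concurrent committed writer: commits
ok2 = t2.commit()  # write-write conflict with t1 on "x": aborts
```

In this sketch, `t2` still reads from its own snapshot (taken before `t1` committed) and is aborted at commit time, which mirrors how snapshot-based protocols detect conflicts only between concurrent writers rather than between readers and writers.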




Computing resources for this work were provided by an AWS in Education Research Grant. The research of R. Rodrigues is funded by the European Research Council (ERC-2012-StG-307732) and by FCT (UID/CEC/50021/2013). This work was partially supported by NOVA LINCS (UID/CEC/04516/2013) and EU H2020 LightKone project (732505). This chapter is derived from Moniz et al. (2017).


  1. Ananthanarayanan R, Basker V, Das S, Gupta A, Jiang H, Qiu T, Reznichenko A, Ryabkov D, Singh M, Venkataraman S (2013) Photon: fault-tolerant and scalable joining of continuous data streams. In: SIGMOD'13: proceedings of the 2013 international conference on management of data, pp 577–588
  2. Baker J, Bond C, Corbett JC, Furman J, Khorlin A, Larson J, Leon JM, Li Y, Lloyd A, Yushprakh V (2011) Megastore: providing scalable, highly available storage for interactive services. In: Proceedings of the conference on innovative data systems research (CIDR), pp 223–234. http://www.cidrdb.org/cidr2011/Papers/CIDR11_Paper32.pdf
  3. Bronson N, et al (2013) TAO: Facebook's distributed data store for the social graph. In: Proceedings of the 2013 USENIX annual technical conference, pp 49–60
  4. Chang F, Dean J, Ghemawat S, Hsieh WC, Wallach DA, Burrows M, Chandra T, Fikes A, Gruber RE (2008) Bigtable: a distributed storage system for structured data. ACM Trans Comput Syst 26(2):4:1–4:26. http://doi.acm.org/10.1145/1365815.1365816
  5. Corbett JC, et al (2012) Spanner: Google's globally-distributed database. In: Proceedings of the 10th USENIX conference on operating systems design and implementation, OSDI'12, pp 251–264. http://dl.acm.org/citation.cfm?id=2387880.2387905
  6. DeCandia G, et al (2007) Dynamo: Amazon's highly available key-value store. In: Proceedings of the 21st ACM symposium on operating systems principles, pp 205–220. http://doi.acm.org/10.1145/1294261.1294281
  7. Elnikety S, Zwaenepoel W, Pedone F (2005) Database replication using generalized snapshot isolation. In: Proceedings of the 24th IEEE symposium on reliable distributed systems, SRDS'05. IEEE Computer Society, Washington, DC, pp 73–84. https://doi.org/10.1109/RELDIS.2005.14
  8. Hoff T (2009) Latency is everywhere and it costs you sales – how to crush it. Post at the high scalability blog. http://tinyurl.com/5g8mp2
  9. Kraska T, Pang G, Franklin MJ, Madden S, Fekete A (2013) MDCC: multi-data center consistency. In: Proceedings of the 8th ACM European conference on computer systems, EuroSys'13, pp 113–126. http://doi.acm.org/10.1145/2465351.2465363
  10. Lakshman A, Malik P (2010) Cassandra: a decentralized structured storage system. SIGOPS Oper Syst Rev 44(2):35–40. http://doi.acm.org/10.1145/1773912.1773922
  11. Lamport L (1978) Time, clocks, and the ordering of events in a distributed system. Commun ACM 21(7):558–565. http://doi.acm.org/10.1145/359545.359563
  12. Lamport L (1998) The part-time parliament. ACM Trans Comput Syst 16(2):133–169. http://doi.acm.org/10.1145/279227.279229
  13. Lamport L, Malkhi D, Zhou L (2010) Reconfiguring a state machine. ACM SIGACT News 41(1):63–73
  14. Lloyd W, Freedman MJ, Kaminsky M, Andersen DG (2013) Stronger semantics for low-latency geo-replicated storage. In: Proceedings of the 10th USENIX conference on networked systems design and implementation, NSDI'13, pp 313–328. http://dl.acm.org/citation.cfm?id=2482626.2482657
  15. Mahmoud H, Nawab F, Pucher A, Agrawal D, El Abbadi A (2013) Low-latency multi-datacenter databases using replicated commit. Proc VLDB Endow 6(9):661–672. http://dl.acm.org/citation.cfm?id=2536360.2536366
  16. Moniz H, Leitão J, Dias RJ, Gehrke J, Preguiça N, Rodrigues R (2017) Blotter: low latency transactions for geo-replicated storage. In: Proceedings of the 26th international conference on World Wide Web, International World Wide Web conferences steering committee, WWW '17, Perth, pp 263–272. https://doi.org/10.1145/3038912.3052603
  17. Saeida Ardekani M, Sutra P, Shapiro M (2013a) Non-monotonic snapshot isolation: scalable and strong consistency for geo-replicated transactional systems. In: Proceedings of the 32nd IEEE symposium on reliable distributed systems (SRDS 2013), pp 163–172. https://doi.org/10.1109/SRDS.2013.25
  18. Saeida Ardekani M, Sutra P, Shapiro M, Preguiça N (2013b) On the scalability of snapshot isolation. In: Euro-Par 2013 parallel processing. LNCS, vol 8097. Springer, pp 369–381. https://doi.org/10.1007/978-3-642-40047-6_39
  19. Schneider FB (1990) Implementing fault-tolerant services using the state machine approach: a tutorial. ACM Comput Surv 22(4):299–319. http://doi.acm.org/10.1145/98163.98167
  20. Shute J, Vingralek R, Samwel B, Handy B, Whipkey C, Rollins E, Oancea M, Littlefield K, Menestrina D, Ellner S, Cieslewicz J, Rae I, Stancescu T, Apte H (2013) F1: a distributed SQL database that scales. Proc VLDB Endow 6(11):1068–1079. https://doi.org/10.14778/2536222.2536232
  21. Sovran Y, Power R, Aguilera MK, Li J (2011) Transactional storage for geo-replicated systems. In: Proceedings of the 23rd ACM symposium on operating systems principles, SOSP'11, pp 385–400. http://doi.acm.org/10.1145/2043556.2043592
  22. Zhang Y, Power R, Zhou S, Sovran Y, Aguilera M, Li J (2013) Transaction chains: achieving serializability with low latency in geo-distributed storage systems. In: Proceedings of the 24th ACM symposium on operating systems principles, SOSP, pp 276–291. http://doi.acm.org/10.1145/2517349.2522729

Authors and Affiliations

  1. Google and NOVA LINCS, New York, USA
  2. DI/FCT/Universidade NOVA de Lisboa and NOVA LINCS, Lisbon, Portugal
  3. SUSE Linux GmbH and NOVA LINCS, Lisbon, Portugal
  4. Microsoft, Seattle, USA
  5. Instituto Superior Técnico (ULisboa) and INESC-ID, Lisbon, Portugal

Section editors and affiliations

  • Asterios Katsifodimos, Delft University of Technology, Delft, Netherlands
  • Pramod Bhatotia, School of Informatics, University of Edinburgh, Edinburgh, United Kingdom