Reviews for Paper 19: Eventual Consistency Today: Limitations, Extensions, and Beyond

Review 1

Single-server DBMSs provide strong consistency, but in a distributed setting, it is not possible to ensure strong consistency and high availability in the presence of network failures. According to Brewer's CAP theorem, one cannot simultaneously guarantee consistency and availability if the network may be partitioned (divided by a cut between sets of nodes). In particular, while a partition exists, writes must wait because their effects cannot be propagated to nodes across the partition, or else consistency is lost. Many applications such as social networks and e-commerce platforms need high availability and partition tolerance, and may be willing to sacrifice some consistency.

Eventual consistency is a paradigm for distributed databases in which, if no additional updates are made, all replicas will eventually converge to a matching state. In “Eventual Consistency Today,” the authors discuss various consistency models, such as causal consistency and eventual consistency, and argue that it is possible to build reliable applications on an eventually consistent platform. The article presents measures of consistency, such as probabilistically bounded staleness (PBS), which estimates what fraction of reads return the most recent version a given length of time after the latest write. The article also discusses the theory of how to work with eventually consistent DBMSs, using logically monotonic (CALM) programs that append to but never retract previous state.

The main contributions of the paper are a review of the definitions of consistency properties, measures of how consistent a system is in practice, and theoretical and practical techniques for managing eventually consistent data. The paper does not propose new methods, but it covers many tools, such as ACID 2.0 (associative, commutative, idempotent, distributed) operations, that make working with eventually consistent stores easier. ACID 2.0 functions are order-insensitive and can be applied more than once without altering their effects, making them easier to reason about even when nodes temporarily miss updates from other nodes.
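
To make the ACID 2.0 idea concrete, here is a minimal Python sketch (my own illustration, not code from the paper) of an operation that is associative, commutative, and idempotent; replicas that apply the same updates in different orders, or more than once, still converge:

    # Hypothetical sketch: a grow-only set of "likes" merged with set union.
    # Union is associative, commutative, and idempotent (ACID 2.0 style),
    # so replicas converge no matter how updates are ordered or duplicated.
    def merge(replica_state: set, update: set) -> set:
        return replica_state | update

    replica_a: set = set()
    replica_b: set = set()

    # The same two updates arrive in different orders, and one is duplicated.
    replica_a = merge(merge(replica_a, {"alice"}), {"bob"})
    replica_b = merge(merge(merge(replica_b, {"bob"}), {"alice"}), {"alice"})

    assert replica_a == replica_b == {"alice", "bob"}  # both replicas converge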

The paper largely argues in favor of eventual consistency as a viable tool. But eventual consistency has serious drawbacks compared with stricter consistency models. Under eventual consistency, as noted in the paper, simultaneous transactions could lead to dirty reads and inconsistent behavior in an accounting system. Not all organizations are willing to handle this problem “outside the system” at the application level. Moreover, working with eventual consistency imposes some burden on the application programmer.


Review 2

The article is a pragmatic opinion on the eventual consistency models of today. It progresses in a question-and-answer fashion, addressing three major questions about performance, programming, and the trade-offs of providing stronger guarantees. The motivation of the article is to address the different aspects of the eventual consistency model in use today and to highlight the advantages of using such a technique. According to Brewer’s conjecture, we can choose only two of the following: consistency, availability, and partition tolerance. To achieve high availability, we have to trade off the consistency aspect of the system.

The concept of eventual consistency has been studied and deployed in various forms since the 1970s, but it has become more prominent with highly scalable NoSQL stores. Changes made to one server eventually propagate to all the replicas, providing high availability. It is fairly straightforward to implement, with replicas required only to exchange information about which writes they have seen. It relies on a liveness property, which states that “something good eventually happens.” The level of eventual consistency can be measured with the probabilistically bounded staleness (PBS) approach, which provides metrics such as the fraction of reads that will return the most recent version within a fixed amount of time after a write. Such measurements show eventual consistency to often be strongly consistent in practice.
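
To illustrate the PBS idea, here is a rough Monte Carlo sketch (my own, assuming exponentially distributed anti-entropy delays rather than the authors' actual model) that estimates the probability that a read started t milliseconds after a write sees the latest version:

    import random

    # Assumption: each replica receives a write after an exponentially
    # distributed anti-entropy delay (mean 10 ms), and a read is served
    # by one replica chosen uniformly at random.
    def p_fresh_read(t_ms, n_replicas=3, mean_delay_ms=10.0, trials=100_000):
        fresh = 0
        for _ in range(trials):
            delays = [random.expovariate(1.0 / mean_delay_ms) for _ in range(n_replicas)]
            if random.choice(delays) <= t_ms:  # the chosen replica already has the write
                fresh += 1
        return fresh / trials

    for t in (1, 10, 50, 100):
        print(f"P(read {t} ms after write is fresh) ~= {p_fresh_read(t):.3f}")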

The article discusses the concept of eventual consistency in a fair light and provides basic terminology related to it along with a quick intro on the models existing today. The authors are humorous at times and provide an interesting read.

However, the article does not provide much insight into the statistics involved in the usage of this concept. Given the trade-off against stronger consistency models, not many services will adopt this method to replace their current systems.



Review 3

What is the problem addressed?
Even though eventual consistency lacks useful guarantees, why are scores of usable applications and profitable businesses built on top of eventually consistent infrastructure? This article begins to answer this question by describing several notable developments in the theory and practice of eventual consistency, with a focus on immediately applicable takeaways for practitioners running distributed systems in the wild. As production deployments have increasingly adopted weak consistency models such as eventual consistency, we have learned several lessons about how to reason about, program, and strengthen these weak models. The authors primarily focus on three questions and some preliminary answers:
How eventual is eventual consistency?
How should one program under eventual consistency?
Is it possible to provide stronger guarantees than eventual consistency without losing its benefits?

Why important?
In a July 2000 conference keynote, Eric Brewer publicly postulated the CAP (consistency, availability, and partition tolerance) theorem, which would change the landscape of how distributed storage systems were architected. Brewer’s conjecture proved prescient: in the following decade, with the continued rise of large-scale Internet services, distributed-system architects frequently dropped “strong” guarantees in favor of weaker models—the most notable being eventual consistency.

1-2 main technical contributions? Describe.
Despite the lack of safety guarantees, eventually consistent data stores are widely deployed. Why? The survey argues that just because eventual consistency doesn’t promise safety doesn’t mean safety isn’t often provided, and it describes how to measure and predict these properties of an eventually consistent system. The two main kinds of mechanisms for quantifying eventual consistency are measurement and prediction. Measurement answers the question, “How consistent is my store under my given workload right now?” while prediction answers the question, “How consistent will my store be under a given configuration and workload?” Measurement is useful for runtime monitoring and alerts or verifying compliance with SLOs (service-level objectives). Prediction is useful for probabilistic what-if analyses such as the effect of configuration and workload changes and for dynamically tuning system behavior.
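
As a toy illustration of the measurement side (my own sketch, not the paper's tooling), one can log writes and reads for a key and compute what fraction of reads returned the newest version written before they ran:

    # Toy measurement sketch: events are (time_in_seconds, version) pairs.
    writes = [(0.000, 1), (0.050, 2), (0.120, 3)]                 # versions written
    reads  = [(0.010, 1), (0.055, 1), (0.070, 2), (0.130, 3)]     # versions observed

    def fraction_fresh(writes, reads):
        fresh = 0
        for read_time, version_read in reads:
            latest = max(v for t, v in writes if t <= read_time)  # newest version so far
            if version_read >= latest:
                fresh += 1
        return fresh / len(reads)

    print(f"fraction of fresh reads: {fraction_fresh(writes, reads):.2f}")  # 0.75 here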

1-2 weaknesses or open questions? Describe and discuss
This survey gives a thorough overview of eventual consistency. However, I couldn't find a formal definition of eventual consistency. Maybe the authors thought readers are already familiar with it.



Review 4

This paper introduces the technique of eventual consistency, which provides the weaker guarantee that all reads of a given data item will eventually return the same value if no additional updates are made to it. In general, it gives an introduction to several developments in eventually consistent systems, including how and why eventually consistent systems are programmed, deployed, and evolved. The paper first gives an overview of the history and concepts of eventual consistency, as well as how to implement it. Then it discusses how eventual eventual consistency really is, including metrics, mechanisms, and probabilistically bounded staleness (PBS). Following that are the related programming issues. Finally, it discusses the limits and future of eventual consistency.

There is not one specific problem being solved here; rather, the discussion follows from the evolution of eventual consistency. Eric Brewer postulated the CAP (consistency, availability, and partition tolerance) theorem, which holds that distributed systems requiring always-on, highly available operation cannot also guarantee consistent single-system semantics. Later distributed systems therefore made use of weaker models, and eventual consistency is one of them.

The major contribution of the paper is that it provides a big picture of eventual consistency. It provides the necessary background and concepts for understanding how eventual consistency works. It also presents several interesting discussions related to the design of eventually consistent systems. Here we summarize the key components:

1. Eventual consistency is straightforward to implement: replicas exchange information with one another about which writes they have seen (anti-entropy); see the sketch after this list.
2. Eventual consistency is purely a liveness property, which does not guarantee safety.
3. Metrics and mechanisms for quantifying how consistent a store actually is.
4. Discussion of stronger-than-eventual consistency, including compensating actions and CALM/CRDTs.
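
Below is a minimal sketch of the anti-entropy idea from item 1 (my own illustration, not code from the paper): each replica keeps the set of writes it has seen and periodically merges that set with a peer's, so all replicas eventually hold the same writes.

    # Minimal anti-entropy sketch: replicas gossip the set of write IDs they know.
    class Replica:
        def __init__(self, name):
            self.name = name
            self.seen = set()              # IDs of writes this replica has applied

        def accept_write(self, write_id):
            self.seen.add(write_id)

        def anti_entropy(self, peer):      # exchange and merge knowledge with a peer
            merged = self.seen | peer.seen
            self.seen = peer.seen = merged

    replicas = [Replica(f"r{i}") for i in range(3)]
    replicas[0].accept_write("w1")         # writes initially land on different replicas
    replicas[2].accept_write("w2")

    for _ in range(2):                     # a couple of pairwise gossip rounds
        for a, b in zip(replicas, replicas[1:]):
            a.anti_entropy(b)

    print([sorted(r.seen) for r in replicas])  # every replica now knows {w1, w2}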

One interesting observation: this paper is in general good at providing background on eventual consistency. It would be better if it provided more detailed examples rather than only concepts and abstract descriptions. For example, several systems use eventual consistency; it would be great if the paper mentioned those systems and gave an overview of how they implement eventual consistency and how the results meet their system requirements.


Review 5

This paper describes the 'eventually consistent' concept and explains why many applications prefer eventually consistent databases even though they provide very weak consistency guarantees. Based on the CAP (consistency, availability, partition tolerance) theorem, it is impossible to achieve an always-on experience with all reads consistent while the system experiences partial failures. So, applications that need high availability seem to have to sacrifice consistency.

The authors also describe how programmers should develop their applications and what points they should consider under eventual consistency. They basically say that programmers should either limit themselves to certain data structures that avoid inconsistency, or handle the possible inconsistencies inside their application.

Overall, it is a really good article which clearly describes related issues with eventually consistent data systems.


Review 6

This paper addresses the problem of eventual consistency. If you do not have a guarantee on consistency, then how can you write applications that are correct? Well, it turns out that for many cases it doesn't matter. SSI is not necessary in many cases; for example, in a social news feed, it doesn't matter if some statuses show up out of order, or take 5 minutes to propagate to all users, as long as users can see the update at some point.

The paper then goes into how eventual consistency is implemented. A machine receives a write and stores it locally. It then responds to the user and asynchronously alerts the other servers in the cluster of the write. To increase durability, you can alter the system to not respond to the user until the write has been written to some number W of machines, which would allow you to survive W-1 machines failing. The authors spend some time discussing PBS, probabilistically bounded staleness. This is a way of measuring, for some X number of seconds after a write, what percentage Y of reads will return the most recent version. They discuss further extensions to eventual consistency, such as causal consistency, which guarantees that causally related writes are seen in order.
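
The W-replica trade-off described above can be sketched roughly as follows (a simplified, synchronous toy model of my own; real stores such as Dynamo or Cassandra do this with asynchronous replication and background anti-entropy):

    # Toy sketch: a write is acknowledged once W replicas have stored it;
    # the remaining replicas receive the value later via anti-entropy.
    class Replica:
        def __init__(self):
            self.store = {}

        def apply(self, key, value):
            self.store[key] = value
            return 1                         # one acknowledgment

    def write(replicas, key, value, w):
        acks = 0
        pending = []
        for r in replicas:
            if acks < w:
                acks += r.apply(key, value)  # wait for these acks before returning
            else:
                pending.append(r)            # deliver to the rest in the background
        return acks >= w                     # success iff W replicas acknowledged

    replicas = [Replica() for _ in range(3)]
    ok = write(replicas, "x", 42, w=2)       # returns before the third replica has x
    print(ok, [r.store for r in replicas])   # True [{'x': 42}, {'x': 42}, {}]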

One of the ideas that I found most interesting in this paper is that by giving up on guaranteed consistency and relaxing requirements, we can actually, on average, have better consistency. This is because locking, latching, and other traditional overhead, while guaranteeing consistency, dramatically slow down the system. If you just let the writes propagate through the system and don't worry about locks, you can actually reach a consistent state faster by avoiding the overhead. This is, obviously, probabilistic, and not a guarantee on performance. The improvement is large enough, however, that the authors propose it as a counter-example to the traditional ATM example: with a 99.9% chance that a read will be correct, an overdraft fee can probably take care of the few cases that do occur.

This paper doesn't really make any technical contributions, but it does provide a great summary of the field and a glimpse into the future. I really liked the way this paper was written. It was easy to read, and explained complex topics in a simple, natural way. I really would have liked to have seen more information on how other systems are implemented, and how they performed. This article only talked about Cassandra and Dynamo - what have other systems done differently?


Review 7

The problem put forward in the paper is that, since eventual consistency provides very limited safety guarantees according to Brewer's CAP theorem, why and how are so many usable applications and profitable businesses built on top of eventually consistent infrastructure? The paper provides background and architectural information about eventually consistent systems to help understand how and why to program, deploy, and evolve them. The paper also discusses the future of eventually consistent systems.

Eventual consistency means all servers eventually converge to the same state. There exist consistency-availability and consistency-latency trade-offs. Instead of sacrificing user experience or performance, eventual consistency weakens the guarantees of safety. Thus it can support both availability and high performance, and eventually, after all updates have propagated, users get consistent data. In the implementation, a server applies a write locally and broadcasts it to the rest of the cluster in the background; this anti-entropy process is asynchronous. To ensure convergence, replicas communicate with each other about their writes, and the system often adopts a “last writer wins” rule.
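
A minimal sketch of the "last writer wins" rule mentioned above (my own illustration; timestamps here are logical, and ties would need an extra tie-breaking policy):

    # Last-writer-wins register: each write carries a timestamp, and merging two
    # replica states keeps the value with the larger timestamp.
    def lww_merge(a, b):
        # a and b are (timestamp, value) pairs; the newer write wins.
        return a if a[0] >= b[0] else b

    replica_1 = (10, "draft")      # write accepted at logical time 10
    replica_2 = (12, "published")  # concurrent write accepted at logical time 12

    converged = lww_merge(replica_1, replica_2)
    assert lww_merge(replica_2, replica_1) == converged  # merge order does not matter
    print(converged)               # (12, 'published'): the later write wins everywhere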

Though eventual consistency has no safety guarantees, eventually consistent systems are widely used because they work well in practice even without the safety property. The metrics for eventual consistency are the time until a write becomes visible and the number of versions by which a read may lag. The mechanisms for quantifying it are measurement, for runtime handling, and prediction, for probabilistic analysis; PBS is used for this quantification. To improve the safety of eventual consistency, several methods have been created. Compensation is a way to achieve safety by compensating for incorrect actions: it ensures that mistakes are eventually corrected, but does not guarantee that no mistakes are made. Whether to use eventual consistency is decided by weighing its benefit against the cost of compensating for inconsistency anomalies. Its shortcoming is that it requires dealing with inconsistencies outside the system. CALM can be used to test whether operations are safe in an eventually consistent system, and CRDTs are data types that can never produce safety violations, though they limit the available operations.

The strength of the paper is that it provides detailed information about eventual consistency, including how to quantify it and how to handle it. For certain models or theorems, like PBS, it gives the background and the way to apply them instead of just using them directly, which makes the paper accessible to the reader.

As for weaknesses, I do not think the compensation mechanism can meet the needs of applications that require safety guarantees, since it is more like an apology after doing something bad. Though the mistake is corrected, it still happened. It is a low-cost make-up solution, but it is not perfect.


Review 8

This paper goes over the history and benefits of eventual consistency: how it started and what its characteristics, benefits, and limitations are. The paper is mostly a high-level overview of the idea with few technical details, but it serves as a good insight into the context in which eventual consistency was developed and the different efforts currently surrounding it.

Eventual consistency is a consistency model that guarantees that updates are eventually propagated to all replicas, but not immediately. It follows from the CAP (consistency, availability, partition tolerance) theorem that there is a trade-off between an always-on experience and being able to read up-to-date data, if the data is distributed and there are regular partition failures. Eventual consistency is embraced to maximize user experience at a cost of “correctness,” and it has been shown to be “good enough” for most real-world workloads. The metric that decides whether or not it is “good enough” is the net benefit B - CR, where B is the benefit of weak consistency, C is the cost of each inconsistency anomaly, and R is the rate of anomalies. Many real-world systems have shown that eventual consistency works for many web workloads, and metrics such as PBS have been designed to evaluate it.
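
A hypothetical back-of-the-envelope instance of the B - CR trade-off (the numbers below are made up for illustration and do not come from the paper):

    # Hypothetical figures: weak consistency saves $1000/day in latency and
    # hardware (B), each anomaly costs $5 to compensate (C), and anomalies
    # occur 20 times per day (R).
    B = 1000.0   # benefit of weak consistency, per day
    C = 5.0      # cost of compensating one inconsistency anomaly
    R = 20.0     # anomalies per day

    net = B - C * R
    print(f"net benefit per day: ${net:.2f}")  # $900.00 > 0, so weak consistency pays off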

One interesting point brought out by the paper is the limitations of eventual consistency. While eventual consistency works well, there are workloads that it cannot support by its nature. Staleness guarantees are impossible, so many common conditions that specify constraints on data recency or data size/uniqueness cannot be supported.

The paper does a good job providing a nice high level overview and the history behind the theory and implementation of eventual consistency. However, it would have been nice if it included more hard examples of eventual consistency in action, and maybe even some strong case studies.



Review 9

This paper discusses several issues around eventual consistency, such as "What is the eventual state?" and "How eventual can it be?". The paper also addresses implementation problems and briefly introduces slightly stronger consistency models.

Eventual consistency only guarantees a liveness property, and any value may be read before the data replicas finally converge to a consistent state. Since eventual consistency does not need to guarantee safety, updates only need to modify a single replica and can return immediately with bounded latency. Though it sacrifices data consistency and puts durability at risk, eventual consistency provides high availability, which is the most desired feature of many services.

In practice, eventually consistent stores are usually strongly consistent. This paper points out that the inconsistency window of a system running an eventual consistency protocol is usually small and "good enough" in practice. Using the probabilistically bounded staleness (PBS) prediction model, the results reported in the paper show that in production, eventually consistent stores are often much faster than their strongly consistent counterparts.

Eventual consistency leaves the application to deal with conflicts. Since an eventually consistent system is not guaranteed to be consistent at any given time, applications should consider anything bad that may happen among the operations performed on the servers; the programmer has to figure out how to resolve or compensate for possible conflicts. The paper also describes ways to avoid conflicts: CALM and CRDTs.

In all, this paper offers many insights into eventual consistency. As mentioned above, it provides measurement and prediction methods to figure out how eventual the consistency is in practice and gives measured results from popular real eventually consistent systems. In addition, it summarizes several good approaches to programming over such systems, such as CRDTs and the interesting ACID 2.0 design pattern.

Though this paper covers many essential issues of eventual consistency well, it is blurry on several details:
1. Though it provides several good application programming strategies for avoiding conflicts, the paper omits discussion of the common strategies for resolving conflicts. This part could give more insight into the workloads of eventually consistent systems and a more common way to program on such systems.
2. When it comes to anti-entropy, the only method discussed in the paper is asynchronous broadcast, but there are other ways, such as the “repair on write” technique implemented in Dynamo. More generally, the paper does not discuss the exact implementations used to reach the eventual state.


Review 10

Problem/Summary:
Traditional databases focused on providing the ACID properties (atomicity, consistency, isolation, and durability) for database operations. However, this caused databases to have a lot of overhead, especially when running in distributed environments. The CAP theorem states that it is impossible to guarantee both consistency and availability in a system that has partitions. In order to improve availability, many developers have been adhering to a lower standard of consistency known as eventual consistency. This paper describes the performance of eventual consistency, and how stronger guarantees can be achieved without sacrificing availability.

In eventual consistency, operations that change data are served by a node that has a replica of the data, which then passes the changes on to the other nodes. Eventual consistency does not require all data replicas to have identical information by the end of an operation; a common model is to require only W replicas to receive the changes before returning successfully. This allows faster response times for operations, but it means that someone reading data is not guaranteed to receive the latest version, since the replica that was read might not have received an update from another modified node yet. The time that updates take to propagate can be predicted or measured, and both methods show around 200 ms of delay at the 99.9th percentile. This suggests that these systems are strongly consistent for the vast majority of the time.

When inconsistencies do occur, companies may either build in a resolution mechanism, or just manually correct the error depending on the potential consequences.

The paper also mentions that for some operations, the order of operations does not matter (e.g., incrementing a variable). This means that the programmer can perform these operations without worrying about resolving conflicts when data is modified in two different places at once. The paper also mentions that causal consistency, and lower ACID isolation levels like Read Committed and Repeatable Read, can be implemented on partitioned systems while still providing high availability.

Strengths:

This paper is extremely readable, and it gives a very understandable overview of the motivations behind Eventual Consistency and how its unique problems are solved.

Weaknesses/Open Questions:

The paper is a survey so it was overly general in some parts.



Review 11

The article discusses eventual consistency, which is the model that is frequently found in distributed systems. Due to the nature of distributed systems, it seems more preferable to drop the usual “strong” guarantees that can be found in traditional DBMSs or single-node systems in order to achieve always-on and highly available operations. We have seen this in the previous Dynamo paper. This article talks about eventual consistency itself, why eventual consistency is used and when and how to use it.

Eventual consistency became popular as a result of distributed database system designers’ efforts to find a weaker consistency model that can achieve both high availability and performance. While the model has become prominent, the article questions how “eventual” is acceptable in the model and discusses ways to implement it in practice. The article suggests that implementing eventual consistency is fairly easy: read and write requests are served locally first, and writes are propagated to replicas in an asynchronous fashion. Implementers only need to make sure the convergence is done correctly, a process often called “anti-entropy”.

The article points out that eventual consistency only guarantees a “liveness” property and makes no “safety” guarantee, meaning eventual consistency guarantees nothing about what happens to the system until it eventually converges to the final state. Even with this lack of a “safety” guarantee, the paper states that eventual consistency is acceptable because it is “good enough” in practice. I thought that sounded a bit lame, but the article continues with actual measurements of consistency in production systems. For example, consistent data is returned within 13.6 ms 99.9 percent of the time for LinkedIn’s data stores, and even faster with SSD storage. The sheer numbers might make it look like eventual consistency is indeed good enough, but the analysis of the rate of inconsistent reads within that window and of the cost of correcting inconsistent data (i.e., compensation) is excluded. This makes the numbers reported in the article less significant.

The article also discusses logical monotonicity as a criterion for deciding whether a program can be run safely on an eventually consistent data store. In other words, a logically monotonic program never retracts an answer it has already given; its set of responses only grows over time. I do not think most distributed applications will satisfy this criterion unless they are almost read-only, making some sort of compensation methodology necessary, and hence the discussion of logical monotonicity may not be that useful in practice.
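
To illustrate the distinction (a toy example of my own, not the article's), a monotone query's answer set only grows as more facts arrive, while a non-monotone query can retract an answer it already gave, which is what makes it unsafe without coordination:

    # Facts (edges of a follower graph) arrive in an arbitrary order on each replica.
    facts = [("alice", "bob"), ("bob", "carol")]

    # Monotone query: "who does alice follow, directly or transitively?"
    # The answer only grows as facts arrive; replicas with fewer facts lag but never lie.
    def reachable_from_alice(known_facts):
        reach = {"alice"}
        changed = True
        while changed:
            changed = False
            for src, dst in known_facts:
                if src in reach and dst not in reach:
                    reach.add(dst)
                    changed = True
        return reach - {"alice"}

    # Non-monotone query: "does alice follow nobody?" The answer can flip from True
    # to False when a new fact arrives, so an early answer may have to be retracted.
    def follows_nobody(known_facts):
        return not any(src == "alice" for src, _ in known_facts)

    print(reachable_from_alice(facts[:1]))   # {'bob'}: partial but still true later
    print(reachable_from_alice(facts))       # {'bob', 'carol'}: the answer only grew
    print(follows_nobody([]), follows_nobody(facts))  # True, then False: retracted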

In conclusion, the article provides a good overview of eventual consistency in terms of its background, characteristics, and implications. Personally, I am not fully convinced by the article’s arguments about eventual consistency being “good enough” in practice or about logical monotonicity under the CALM theorem. While I do not disagree with what the authors state, the reasoning in the article seems to have several holes.




Review 12

This paper defines the eventual consistency model and explores the limits of the model. Eventual consistency is the idea that when a change is made to a data item, the change will propagate to every replica of the item at some later point in time. It is a weaker consistency model because it does not guarantee that all the replicas are in agreement after an update is made, and, as a result, it can lead to inconsistent reads if a replica is still outdated. However, eventual consistency is a popular consistency model to use because it provides high availability and guarantees that, eventually, the system will agree on the state of an object.

It seems odd that eventual consistency would be used in practice, but the paper gives several examples of how the benefits of the model outweigh the cost of inconsistency. One example is Facebook updates. If a user wishes to update their status, they want their friends to also see the update. A strong consistency model would only allow the update to be posted once it is sure every user can see it, but, as the paper states, this can hurt user experience and increase latency. Although it may be easier to reason about a system that is strongly consistent, the purpose of the system is to support users and provide a good user experience through high availability, thus the need for eventual consistency.

The paper also discusses two other ideas, CALM and ACID 2.0. CALM is a theorem that gives programmers the ability to determine which operations and programs are safe in an eventually consistent environment. ACID 2.0 is a set of design patterns and stands for associativity, commutativity, idempotence, and distribution. Using all the properties of ACID 2.0 ensures that a program will pass the CALM tests and be safe in an eventually consistent model. CALM and ACID 2.0 allow for strong guarantees on higher-level program semantics, such as a counter that only ever increases, even though the counter's exact value at any instant may be in doubt.
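
For example, a grow-only counter CRDT (a standard construction, sketched here from memory rather than taken from the paper) keeps one slot per replica and merges states with an element-wise maximum, so the count only ever increases and all replicas converge:

    # Grow-only counter (G-counter) sketch: one slot per replica; increments are
    # local, and merge takes the element-wise maximum of the two states.
    class GCounter:
        def __init__(self, n_replicas, my_id):
            self.counts = [0] * n_replicas
            self.my_id = my_id

        def increment(self):
            self.counts[self.my_id] += 1

        def merge(self, other):            # commutative, associative, idempotent
            self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

        def value(self):
            return sum(self.counts)

    a, b = GCounter(2, 0), GCounter(2, 1)
    a.increment(); a.increment()           # two increments at replica a
    b.increment()                          # one increment at replica b
    a.merge(b); b.merge(a)                 # exchange state in either order
    assert a.value() == b.value() == 3     # converged, and the value never decreases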

The paper closes with a discussion of stronger-than-eventual models. Causal consistency ensures that causally related operations are observed in order: a write is seen only after the writes it depends on. Isolation levels below serializable also provide safety in the form of transaction isolation, and current work in the field shows that it is possible to support these lower isolation levels with high data availability.
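
One common way to capture "seen in causal order" is with vector clocks; the sketch below (my own illustration, not a mechanism described in the paper) shows a replica deciding whether an incoming update's declared dependencies have already been applied locally:

    # Causal-delivery sketch: each update declares the updates it depends on
    # (a vector clock); a replica buffers an update until those deps are satisfied.
    def deps_satisfied(deps, local_clock):
        return all(local_clock.get(r, 0) >= n for r, n in deps.items())

    local_clock = {"r1": 1, "r2": 0}   # this replica has applied update 1 from r1 only

    status_update = {"deps": {}}                   # depends on nothing
    reply_update = {"deps": {"r1": 1, "r2": 1}}    # depends on an r2 update not yet seen

    print(deps_satisfied(status_update["deps"], local_clock))  # True: apply now
    print(deps_satisfied(reply_update["deps"], local_clock))   # False: buffer it, so no
    # reader ever sees the reply before the message it was replying to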

Although providing stricter conditions is desirable, I feel there was a lack of discussion of work on bounding consistency. It seems like work isn't focused on bounding it since tens of milliseconds is acceptable, but this can't possibly always be true for all applications of the model. In the discussion of how to measure eventuality, the paper discusses tracking drift as a measure of how far off a value is from the true value. One idea I had was to create a system that combines eventual consistency with strict consistency. Allow the database to be eventual and measure the drift of the value. Once the value drifts too much from the true value, update the replicas so they are consistent once again. In this way, we can bound the eventuality. One alternative would be to track version counts instead of drift and collapse the versions once the count is too high, but I feel this would be susceptible to high overhead from too many versions being created and collapsed. I think this system may provide stronger guarantees than eventual consistency in that the database is always going to be consistent at certain points in time, but it would still have the benefits of the weaker model.


Review 13


For large-scale Internet services that utilize distributed systems, it is crucial to provide "always-on" functionality. However, this usually conflicts with the consistency of the system, since maintaining consistent data requires additional work and might need some blocking strategy.

This paper presents an introduction to "eventually consistent" systems. Such systems provide high availability and low latency while relaxing consistency to "good enough". The idea of eventual consistency is that an update from a client is performed on a single node and returns immediately, before the change has safely propagated to all the other replicas. The information exchange between replicas takes place afterwards, with conflicts resolved, for example, by a "last writer wins" strategy.
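As a simplified illustration of the "last writer wins" reconciliation described above (my own sketch, not code from the paper), each replica can tag its value with a timestamp and, during the later information exchange, keep the tagged value with the largest timestamp:

    import time

    # Simplified sketch of "last writer wins": each write is tagged with a
    # timestamp, and when replicas exchange state afterwards, the value with
    # the largest timestamp deterministically wins on both sides.

    def local_write(replica, key, value):
        replica[key] = (time.time(), value)    # answer the client immediately

    def anti_entropy(replica_a, replica_b):
        """Exchange state between two replicas so both hold the winning value."""
        for key in set(replica_a) | set(replica_b):
            a = replica_a.get(key, (0, None))
            b = replica_b.get(key, (0, None))
            winner = max(a, b)                  # compares timestamps first
            replica_a[key] = replica_b[key] = winner

    r1, r2 = {}, {}
    local_write(r1, "status", "at the beach")   # concurrent writes on
    local_write(r2, "status", "back at work")   # different replicas
    anti_entropy(r1, r2)
    print(r1["status"])   # both replicas now agree on a single winner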

However, the "eventually consistent" techniques are shown to have strong consistency in practice. Some metrics such as time and number of versions can be used to estimate how consistent is the system.
As a result, "eventually consistent" systems are actually very favorable. This is probably another example of the "KISS" idea.


Review 14


This paper addresses how eventual consistency can be achieved. This is important because, with the rise of large-scale Internet services, distributed-system architectures must give up strong consistency in exchange for always-on, highly available operation, even when network partitions cut communication between active servers. The paper approaches eventual consistency with a discussion of new prediction and measurement techniques that let users determine when the “eventually” of eventual consistency is, guidance on how to avoid inconsistencies, and an analysis of achieving traditional ACID properties while remaining available.

Anti-entropy, the exchange of information between replicas about which writes they have seen, ensures that replicas converge. Anti-entropy can be achieved with asynchronous all-to-all broadcast, in which the replica that receives the write request responds to the user and sends the write to all other replicas in the background. The other replicas then update their locally stored data items. When there are concurrent writes, the winning result is deterministically chosen by the replicas using metadata such as a timestamp. The benefits of eventual consistency are that anti-entropy simply stalls on downed replicas or network partitions rather than blocking operations, latency is bounded because operations are completed locally, there is no complex code for master election, and anti-entropy can be run as often or as rarely as desired. The disadvantage of eventual consistency is that it makes no safety guarantees because it is purely a liveness property; the replicas agree eventually, but no bad behavior is prevented before the “eventually.” However, even without safety, eventual consistency works well in practice because of its latency and availability advantages.

Even though there is no promise of safety, consistency can be assessed with metrics, such as time and versioning, and mechanisms, such as measurement and prediction; these metrics also help ensure that users do not appear to go back in time. Measurement provides runtime monitoring and alerts by determining how consistent the store is under a given workload, and prediction provides what-if analysis by estimating how consistent the store would be under a workload. Probabilistically bounded staleness (PBS) quantifies an expectation of how recent the data returned by reads is. The staleness PBS captures depends on the rate of anti-entropy, which determines the degree of inconsistency, and can be computed from the anti-entropy rate, the network delay, and the local processing delay. PBS has been applied to LinkedIn’s data stores, Yammer’s data stores, and Cassandra user metrics.
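Since PBS is described above as a function of the anti-entropy rate, network delay, and local processing delay, a toy Monte Carlo sketch (my own approximation with assumed exponential delay distributions, not the authors’ model) can estimate the probability that a read issued t milliseconds after a write returns the latest version:

    import random

    # Toy Monte Carlo estimate of a PBS-style staleness probability: a write is
    # visible to a read once its propagation delay (network + processing +
    # anti-entropy lag) has elapsed. Exponential delays are assumed purely
    # for illustration.

    def prob_fresh_read(t_ms, trials=100_000,
                        mean_network_ms=5.0, mean_processing_ms=2.0,
                        mean_anti_entropy_ms=20.0):
        fresh = 0
        for _ in range(trials):
            propagation = (random.expovariate(1.0 / mean_network_ms)
                           + random.expovariate(1.0 / mean_processing_ms)
                           + random.expovariate(1.0 / mean_anti_entropy_ms))
            if propagation <= t_ms:
                fresh += 1
        return fresh / trials

    for t in (10, 50, 100):
        print(f"P(read at {t} ms sees latest write) ~ {prob_fresh_read(t):.3f}")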

One limitation of this paper is that it does not discuss the issues that could arise without a promise of safety. I would have liked to see a specific example of what could go wrong without safety. I would also have liked to see a quantitative analysis of how often issues arise due to the lack of a safety guarantee.



Review 15


This paper is about trading consistency guarantees for availability and performance. It discusses the model of eventual consistency, which states that “if no additional updates are made to a given data item, all reads to that item will eventually return the same value”. The paper motivates the need for ways to deal with consistency issues via the CAP theorem, which tells us that one cannot ensure both the availability of a system and its consistency in the presence of partial failures. The paper then goes on to describe the implementation of eventual consistency, the properties of safety and liveness, metrics for evaluation, the model of probabilistically bounded staleness, the limitations of these models, and their potential future.

The strengths of this paper lie mostly in its beginning. It introduces a topic with which I had no experience and motivates the issues that led to its design. The authors describe how one would implement eventual consistency and its benefits of not requiring difficult code for “corner cases”, as well as bounding and decreasing latency. The paper provides insight into the thoughts of others in the field as well as examples from LinkedIn and Yammer. The difference in latencies between these two companies was insightful; not as insightful as it could have been with more data, but that is not something we can fault the authors for, as such data would be very difficult to collect. After discussing the pros and cons, the authors note that whether eventual consistency is acceptable really depends on the application, and they introduce an alternative called compensation. Weighing the cost and rate of inconsistencies against the benefit of weak consistency is a clear and helpful description of the analysis one would have to do to decide what one’s database needs to support.

The drawbacks of the paper come mostly from a later section on “compensation by design”. This section tries to motivate commutative replicated data types (CRDTs) as another alternative for some database systems. The authors describe an increment operation and how it would not have the same problems as read/write operations. After seeing the banking example in the previous section, the reader is left to wonder to what extent these are actually useful. Where is the line between what commutative operations can give us and what types of systems need consistency or compensation? The authors then try to motivate the Bloom language, but they don’t motivate it well. They say that it encourages “order-insensitive disorderly programming”, which doesn’t sound like it would ever be a good thing. Why should we have to learn a new language to handle consistency? Why shouldn’t our existing systems simply provide support for the options the authors already discussed?

After reading the paper, I agree with the authors that eventual consistency will likely have more admirers and advocates in the future. The authors discuss performing simulations and calculating expected consistency. I would be interested to see simulations that vary network latency, number of nodes, and frequency of node failure, which the authors suggest are responsible for changes in availability and performance. I would expect each of these three variables to be inversely related to consistency, but their exact relationship is not described.



Review 16


Part 1: Overview

This paper introduces a weaker consistency constraint, eventual consistency, which is suitable for large-scale distributed systems where replicas are spread across the network. The paper brings preliminary results to three questions: how eventual is eventual consistency, how should one program under eventual consistency, and is it possible to reach for stronger consistency constraints without losing the benefits? Eventual consistency came up as distributed systems thrived. There is no way to simultaneously maintain an always-on experience (availability) for users and ensure that all readers fetch the latest written version of the data. Therefore, database designers apply a weaker consistency constraint, under which all the servers will eventually converge to the same state. Note that there is no clearly defined time window for the system to converge. Implementing eventual consistency is straightforward; the anti-entropy process is a good guide for implementing eventually consistent systems.

A safety property guarantees that “nothing bad happens”, while a liveness property guarantees that “something good will eventually happen”. Eventual consistency, however, does not provide a safety property. To check how quickly the system converges, we can use criteria like time and probabilistically bounded staleness (PBS). Implementing PBS can also follow the anti-entropy guideline. Programming under eventual consistency involves weighing compensation and its cost against the eventual benefits. Some recent research provides compensation-free programming for eventually consistent systems, but it is still painful to program under eventual consistency. Possible cutting-edge research directions are mentioned that try to reach for stronger consistency constraints while not sacrificing the benefits we get under eventual consistency.

Part 2: Contributions

This paper presents valuable problem-finding research, which others can follow with deeper discussion or mechanism design. It provides new directions for distributed database systems and leads the trend toward weaker consistency properties compared to the traditional hard ACID properties.

Part 3: Possible Drawbacks

This paper is neither a formal survey of the literature nor a pure academic research paper. It mainly focuses on the new hot questions raised by the weaker consistency constraint. There are still unspecified problems and definitions that are not clearly stated; for example, the time window for the convergence of the distributed system remains unclear. In the future these should be figured out and implemented in industry.



Review 17

The paper provides a pragmatic overview of how and why eventually consistent systems evolved, how they are programmed, and their future directions. The main motivation for eventual consistency is the CAP theorem. According to the CAP theorem, it is impossible to simultaneously guarantee consistency, availability, and partition tolerance. Consequently, eventual consistency emerged to provide high availability and performance at the cost of weaker consistency. Eventual consistency involves replica nodes running a process called anti-entropy, in which they exchange information with one another about which updates they have seen. The replica nodes then deterministically choose a “winning” value, often using a simple rule such as “last writer wins”.

The paper discusses why eventual consistency has become widely deployed even though it doesn’t guarantee safety. The main argument is that even though eventual consistency doesn’t provide safety, in practice many applications experience stronger consistency while executing on such systems. In addition, by using different prediction and measurement techniques, it is possible to quantify the behaviour of different applications; when verified via such techniques, these systems appear strongly consistent most of the time. For example, probabilistically bounded staleness (PBS) is one technique used to quantify consistency by providing an expectation of how recent the data returned by reads is. This in turn allows us to measure how far an eventually consistent store’s behaviour deviates from that of a strongly consistent store.

In addition, the paper discusses how to write programs under eventual consistency. It mentions that it is difficult to reason about a system with no guarantee of consistency. The authors discuss two approaches for writing such programs. The first approach, called compensation, allows mistakes to be corrected retroactively, although it doesn’t prevent mistakes from being made. The second approach is that as long as a program limits itself to data structures that avoid inconsistencies altogether, there will never be any consistency violations. In addition, the paper indicates that stronger guarantees can be achieved while maintaining availability, using techniques like causality and transactional properties from traditional database systems.
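To make the compensation approach concrete, here is a small hypothetical sketch (my own, not from the paper): a withdrawal proceeds against possibly stale data, and when reconciliation later detects an overdraft, the application issues a compensating action such as an overdraft fee rather than preventing the mistake up front.

    # Hypothetical sketch of compensation: writes are accepted against possibly
    # stale data, and a later reconciliation step detects mistakes and issues a
    # compensating action instead of preventing them.

    def withdraw(replica_balance, amount):
        # Accepted immediately; the replica's view of the balance may be stale.
        return replica_balance - amount

    def reconcile(true_balance, withdrawals, overdraft_fee=25):
        """Apply all withdrawals to the converged balance and compensate if needed."""
        balance = true_balance - sum(withdrawals)
        compensation = overdraft_fee if balance < 0 else 0
        return balance - compensation, compensation

    # Two concurrent withdrawals were each approved against a stale $100 balance.
    final_balance, fee = reconcile(true_balance=100, withdrawals=[80, 80])
    print(final_balance, fee)   # -85, 25: the mistake is corrected after the fact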

The main strength of the paper is that it focuses on the practical application of eventual consistency. Most of the discussion is based on pragmatic experience rather than theoretical concepts, which often fail to highlight real-world requirements. I like the authors’ explanation that eventual consistency doesn’t always mean that programs will experience inconsistency.

Since most of the concepts discussed are based on observations in the real world, the argument might be skewed toward the particular applications considered. It would have been better if the authors had included more discussion of other research works as well. In addition, some parts of the discussion are kept at a high level; for example, I would have appreciated a detailed discussion of how transactional algorithms provide stronger consistency than eventual consistency. Furthermore, the paper is not well organized with regard to section numbering; sometimes it is not easy to relate subsections to their corresponding main sections.



Review 18

With the continued rise of large-scale Internet services, weaker consistency models are increasingly favored, with eventual consistency being the most notable one. In this article, eventual consistency is explained in terms of why we need it and how one programs against it. Three key questions are discussed in this article.
1. how eventual is eventual consistency?
2. how should one program under eventual consistency?
3. is it possible to provide stronger guarantees than eventual consistency without losing its benefits?

The foundation of eventual consistency is Brewer's CAP theorem, which indicates that it is impossible to achieve "always on" (high availability) and "always read the latest write" (consistency) at the same time. Maintaining a single-system image (SSI) has a cost and reduces availability. Making sure a newly written value is synchronized across the entire system is expensive. Therefore, in situations where low latency is more strongly emphasized, we must relax our expectations of data consistency.

Eventual consistency has the following benefits:
1. easy to implement
2. it solves the problem for large-scale services

What is the eventual state of the database? What intermediate values might be read before system-wide consistency is reached? These questions can be asked in terms of the safety and liveness properties of a distributed database. The question of how fast the system converges to a consistent state, known as the "window of consistency", should also be considered. The article introduces two ways to quantify this: measurement and prediction. For probabilistically bounded staleness, a typical statement would be "100 milliseconds after a write completes, 99.9 percent of reads will return the most recent version, or a version within two of the most recent". Although eventual consistency is often strongly consistent in practice, compensation is required to handle inconsistent transactions.
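The quoted PBS-style statement can be read as a service-level objective; here is a small sketch (an assumed form of such a check, not code from the article) of verifying that kind of SLO against observed read samples:

    # Assumed illustration: check a PBS-style SLO against observed reads. Each
    # sample records how long after the last write the read occurred and
    # whether it returned the most recent version.

    def slo_met(samples, window_ms=100, target=0.999):
        """samples: list of (ms_since_write, returned_latest) tuples."""
        eligible = [fresh for ms, fresh in samples if ms >= window_ms]
        if not eligible:
            return True  # nothing to judge yet
        return sum(eligible) / len(eligible) >= target

    reads = [(120, True), (150, True), (90, False), (200, True)]
    print(slo_met(reads))  # True: all reads taken 100+ ms after the write were fresh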

===Strength===
This paper provides a great overview of the key questions in developing an eventual consistency model. It provides great real-world examples and detailed explanations.

===Weakness===
At the end, the article also briefly discusses how we might push the boundary of eventual consistency models. But it fails to provide more detail on where the eventual consistency model is heading; more discussion on that is desired.



Review 19

This paper talks about what eventual consistency is and some important theory such as CAP, CALM, and ACID 2.0. It then discusses the boundaries of eventual consistency.

This paper starts with the introduction of eventual consistency by Eric Brewer, and then describes eventually consistent systems.
After that, the main part of the paper is devoted to the discussion of three problems: 1) how eventual is eventual consistency;
2) how programming changes when dealing with eventual consistency;
3) whether stronger guarantees than eventual consistency can be provided without losing its benefits.
For 1), eventually consistent systems usually use SLOs to define what "eventual" means, for example: "100 milliseconds after a write completes, 99.9 percent of reads will return the most recent version."
For 2), some important theorems are presented, such as CALM, to guide the development of programs. For some programs, inconsistency is ignored or handled using correct business logic.
For 3), they find that it is possible to provide causal consistency, but that this is the upper bound.


Contribution:
This paper summarizes the development of the concept and implementation of eventually consistent systems and the related theorems, and informs people of the latest progress on its limits.

What I like most about this paper is that it discusses bank transaction systems. We usually think that bank transactions can never go wrong, or there will be big trouble. But this paper points out that if the benefit of the performance improvement is higher than the compensation that must be paid when something goes wrong, banks should actually adopt an eventually consistent system.
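This trade-off can be written as a back-of-the-envelope calculation (numbers invented purely for illustration, not taken from the paper): eventual consistency pays off when the expected compensation cost stays below the per-operation benefit of the weaker model.

    # Back-of-the-envelope sketch with invented numbers: weak consistency is
    # worthwhile when expected compensation cost is below the expected benefit.

    inconsistency_rate = 1e-5        # fraction of operations needing compensation
    cost_per_compensation = 50.0     # e.g., overdraft write-off plus handling
    benefit_per_operation = 0.001    # e.g., value gained from lower latency

    expected_cost = inconsistency_rate * cost_per_compensation
    print(expected_cost < benefit_per_operation)   # True: weak consistency pays off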

I find it hard to criticize this paper because its purpose is to give an overview of eventual consistency. I think the paper is easy to read and covers the most important aspects.



Review 20

This paper presents the idea of building on eventually consistent infrastructure even though it offers no safety guarantees.

Safety is the principle that any value that is read was previously written, whereas liveness specifies that inconsistent replicas will eventually converge to some value, though which value is not necessarily known. These kinds of databases are advantageous where the availability of an application is more important than its consistency at any given point, such as in social networking websites.

Two of the major concepts this paper covers are CALM and ACID 2.0, realized in CRDTs (commutative replicated data types). CALM (Consistency As Logical Monotonicity) applies to programs that are monotonic, that is, they never retract facts and only compute new ones; it helps programmers identify programs that are safe to run over eventually consistent storage. For distributed systems specifically, the authors describe ACID 2.0, where the acronym stands for Associative, Commutative, Idempotent, and Distributed (the D is mostly a placeholder). Designing programs around these properties, or around data structures that embody them, can yield a system that is safe to run under eventual consistency.
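
To illustrate the CALM idea, here is a minimal sketch (mine, not the paper's) of a monotonic, grow-only set of facts: because the merge is a set union, which is associative, commutative, and idempotent, two replicas end up in the same state no matter how updates are ordered or repeated. The class and fact names are invented for the example.

    class GrowOnlySet:
        # Monotonic state: facts are only ever added, never retracted (CALM-friendly).
        def __init__(self):
            self.facts = set()

        def add(self, fact):
            self.facts.add(fact)

        def merge(self, other):
            # Union is associative, commutative, and idempotent, so replicas
            # converge regardless of message order or duplicate delivery.
            self.facts |= other.facts

    a, b = GrowOnlySet(), GrowOnlySet()
    a.add("follows(alice, bob)")
    b.add("follows(bob, carol)")
    a.merge(b); b.merge(a)
    assert a.facts == b.facts  # same state despite different delivery orders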

Eventual consistency can be a good option compared to conventional ACID databases run at lower isolation levels such as read committed: with eventual consistency, at least some of the overhead of building relational ACID databases is avoided. One of the paper's major points is that weaker consistency models can be implemented in a distributed environment with high availability, a good example being Amazon's Dynamo.

One example I was not convinced by was the banking software that relies on the surrounding socio-technical system to deal with inconsistencies. I think banking systems, especially in the ATM example, are expected to be 100% consistent, so eventual consistency may not be the right model for such critical systems. Given that eventually consistent stores often converge within 10-100 milliseconds of their strongly consistent counterparts, it would have been nice if the authors had offered some ideas on how shrinking the inconsistency window could make eventual consistency viable for more critical applications. Overall, this paper definitely brings eventual consistency to the fore and describes its various facets.



Review 21

Eventual Consistency Today: Limitations, Extensions, and Beyond

This paper mainly discusses the concept of the eventual consistency model, its performance in real deployments, and its extensions and limits. It serves well as an introduction to the concept and gives insightful thoughts on its development.

The authors start with Eric Brewer's CAP theorem: distributed systems requiring always-on, highly available operation cannot guarantee the illusion of coherent, consistent single-system operation in the presence of network partitions, which cut communication between active servers. Building on this idea, eventual consistency trades strict consistency for high availability. Eventual consistency provides very few guarantees; the only widely accepted one is that, if no additional updates are made to a data item, all reads of that item will eventually return the same value.

Eventual consistency is implemented in distributed data stores that keep a number of replicas of each data record. To meet low-latency requirements, multiple versions of a data item are allowed to coexist for a short period of time, and an anti-entropy protocol ensures that all versions eventually converge. A naive anti-entropy approach mentioned in the article is to let the user write to any replica and asynchronously broadcast writes to all other replicas in the background, using a "last writer wins" rule for concurrent writes. In this way, every write is treated as a local operation, ensuring low latency.
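
The "last writer wins" rule can be sketched in a few lines (my illustration, not the authors' implementation): every write carries a timestamp, each replica accepts writes locally, and anti-entropy simply keeps the value with the highest timestamp.

    class LWWRegister:
        # Last-writer-wins register: anti-entropy keeps the newest (timestamp, value).
        def __init__(self):
            self.timestamp, self.value = 0, None

        def write(self, timestamp, value):
            # Local write, accepted without coordinating with other replicas.
            # (A real system also needs a tie-break for equal timestamps.)
            if timestamp > self.timestamp:
                self.timestamp, self.value = timestamp, value

        def merge(self, other):
            # Anti-entropy: exchange state and keep the most recent write.
            self.write(other.timestamp, other.value)

    r1, r2 = LWWRegister(), LWWRegister()
    r1.write(1, "draft")   # concurrent writes accepted at different replicas
    r2.write(2, "final")
    r1.merge(r2); r2.merge(r1)
    assert r1.value == r2.value == "final"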

But eventual consistency does not provide any safety guarantees. A favored metric to capture the "window of consistency" is probabilistically bounded staleness (PBS), which takes the form "100 milliseconds after a write completes, 99.9 percent of reads will return the most recent version." In practice, LinkedIn's data stores returned consistent data 99.9 percent of the time within 13.6 ms (and within 1.63 ms on SSDs), 16.5 percent and 59.5 percent faster, respectively, than requiring strong consistency. Hence, under the PBS metric, eventual consistency can often be viewed as effectively strong consistency.

Eventual consistency is not fault free; the current trend is to handle faults through compensation. Programmers want to maximize the benefit of weak consistency minus the cost and rate of compensation, and sometimes designers simply forgo handling small inconsistencies entirely. By separating data-store and application-level consistency concerns, CRDTs, together with the CALM theorem and ACID 2.0, give a better standard for building consistent applications, though they impose further restrictions on how data can be manipulated. So far, the strongest model that improves consistency while keeping low latency is causal consistency.

To sum up, there are some weaknesses in this paper:

1. The eventual consistency model does not provide range query methods, so it cannot support certain types of queries, and the variety of query languages over eventually consistent DBMSs is therefore limited.

2. The paper reports no real-world measurements or tests of the failure rate of merging different versions of replicas. Although causality analysis can resolve some conflicts, there can still be many cases where conflicts cannot be resolved, and a company adopting an eventually consistent DBMS would certainly care about how often it has to pay compensation for such failures.

3. The paper also fails to provide details on the test data used to show that the 99.9th-percentile latency of LinkedIn's eventually consistent store outperforms its competitors. In the real world, many types of queries can be run against a DBMS; revealing the benchmark would better help readers understand the strength of eventual consistency.

Of course, there are also strengths in this paper:

1. It gives a good discussion of PBS as a measurement of the performance of eventually consistent DBMSs.

2. It provides detailed reasoning, via the CAP theorem, about why it is not possible to achieve both high availability and strong consistency.

3. In the last section, the authors analyze the limitations of eventually consistent DBMSs honestly, which helps readers judge when it is most appropriate to favor eventual consistency.



Review 22

Eventual consistency is a consistency model used in distributed computing to achieve high availability; it informally guarantees that, if no new updates are made to a given data item, eventually all reads of that item will return the same value. This is achieved by having replicas converge at some later point, before they are read. The paper focuses on how eventual consistency is implemented and reasoned about in the real world, and on its future direction.
Eventual consistency can provide better availability and better performance, but at the cost of consistency. Every write is applied to some node in the distributed system, and all servers eventually converge to the same state, but that state is not guaranteed to match the one a single-system image (SSI) would have produced.
Implementing eventual consistency is fairly straightforward; anti-entropy can be used to reconcile replicas. One design consideration is choosing W, the number of servers that must acknowledge a write before it is considered successful. A higher W provides better durability but worse availability.
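
A rough sketch of the W trade-off (mine, not the paper's): the coordinator sends a write to every replica but reports success once W of them acknowledge, and the rest catch up later via anti-entropy. The Replica class and its fields are invented for the example, and the loop is sequential only for brevity.

    class Replica:
        def __init__(self, up=True):
            self.up, self.store = up, {}

        def apply(self, key, value):
            if self.up:
                self.store[key] = value
            return self.up  # acknowledge only if the replica is reachable

    def write_with_quorum(replicas, key, value, w):
        # Report success once w replicas acknowledge; the others are repaired
        # in the background. Higher w: more durability, less availability.
        acks = sum(1 for r in replicas if r.apply(key, value))
        return acks >= w

    nodes = [Replica(), Replica(), Replica(up=False)]
    print(write_with_quorum(nodes, "x", 1, w=2))  # True: 2 of 3 acknowledged
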
One important notion for eventual consistency is safety, i.e., "nothing bad happens." The difficulty with eventual consistency is that it makes no safety guarantees, but measurement and prediction techniques can quantify how eventual it is in practice, and the experimental results suggest that eventually consistent stores are often strongly consistent.
In terms of programming with eventual consistency, compensating for incorrect actions ensures that mistakes are eventually corrected. The paper describes how compensation is designed and introduces CALM to identify which programs are safe under eventual consistency.
At the end, the authors discuss weaker consistency models and current research on eventually consistent data types for reasoning about disorder in distributed systems.

Strength:
This paper clearly introduces eventual consistency, which can provide high performance, high availability, and low latency at the cost of consistency. It describes the problems eventual consistency faces and discusses real-world approaches to dealing with them. The paper is very easy to understand because each concept or problem is accompanied by a specific example.

Weakness:
For eventual consistency, there are three ways to deal with conflicts, i.e., to perform reconciliation.
The paper seems to discuss the problems under the assumption of asynchronous repair. It would be worth addressing the other options as well, because write repair avoids some of these problems and the problems with read repair are a little different.


Review 23

The paper introduces a number of concepts: linearizability, eventual consistency, the CAP theorem, consistent data structures, and causal consistency. Throughout the paper, consistency refers to a property of individual entries in the database. Linearizability is the property that a read of an entry returns the most recent write. By contrast, eventual consistency is the property that, eventually, all replicas of an entry will agree on its value. This is a very weak guarantee, and the paper illustrates it with a few examples. The CAP theorem asserts that a distributed system cannot be fully consistent and available in the face of network partitions. This has important implications for designers of distributed databases, as it forces them to make a trade-off. Many systems opt for high availability over consistency, and such systems often provide eventual consistency. These can be used safely via consistent data structures, or errors can be dealt with using some sort of compensation (or fine, whatever is appropriate). Causal consistency is the strongest form of consistency that can be guaranteed in highly available systems; it guarantees that writes (from the same user) occur in order and that writes follow reads. The paper also ran experiments to estimate the 99.9% inconsistency window for various eventually consistent databases and found that the window is often very small and the performance benefits over strongly consistent databases can be large.

This paper presented the material in a logical order, used simple examples (Google Docs, and the one clarifying causal consistency), and was generally easy to read. One critique I had was that there were few details about the experiments run.


Review 24

This paper provides an overview of the pros and cons of eventual consistency and how developers can build applications on top of it when immediate consistency is not guaranteed. In any distributed database system, the administrator has to make a trade-off between availability and consistency. Even with these trade-offs, more and more databases have started to use weaker forms of consistency. The authors do a good job of investigating why this is happening and what we can learn from these eventually consistent databases.

They do so by examining what it means to be eventually consistent and how to program under this constraint. The definition of eventual consistency is that, when there are no further writes to the database, it will eventually converge so that all copies have the same data. The authors state that one easy way to achieve this is to apply a write at one copy and asynchronously propagate it to all replicas. When there is a conflict, the write that occurred latest is the one the database eventually keeps.

However, how long does it take for the database to become consistent? The authors show that in the real world, databases can converge very quickly: LinkedIn's data servers return consistent data 99.9% of the time within 13.6 ms on disk and 1.63 ms on solid state drives, and studies show that Cassandra converges to a consistent state within about 200 ms. When programming on these systems, the authors argue that sometimes the inconsistencies do not matter in the short term; for example, it does not really matter if a comment on a social media site does not propagate to all servers immediately. Finally, a technical report from UT Austin has shown that causal consistency is the strongest form of consistency achievable without sacrificing availability.

This paper does a great job of explaining eventual consistency and why most workloads do not need to worry about immediate consistency. It provides concrete evidence that the availability and performance gains of eventual consistency outweigh the loss of semantic guarantees. But I still have concerns about the following point:

1. The authors mention that even banking software can use eventual consistency in practice. I think that, for major transactions, the uncertainty of not knowing the exact amount in a bank account is enough for a bank to avoid eventual consistency. What happens if, because of an inconsistent intermediate state, the bank loses millions of dollars?



Review 25

This paper discusses the guarantees (or lack thereof) provided by the eventually consistent model of distributed database systems. The eventual consistency model guarantees that if no additional updates are made to a given data object, all reads to that item will eventually return the same value. That is, if network partitions arise and prevent communication between servers, an individual server is still allowed to complete updates, even though this may cause temporary inconsistency in the database. This scheme prioritizes availability of data over consistency. Although this architecture does not guarantee all of the ACID properties, it is used in many commercial systems.

There are several reasons that an eventually consistent database is suitable for many distributed applications. The paper uses the example of posting a status on a social network. While network partitions may exist at the time a user posts a status, it is perfectly acceptable if that status does not appear immediately in the newsfeed of every one of that user’s friends. A delay of a few minutes in propagating the status is not easily visible to the user, whereas informing the user that network partitions exist and asking them to repeatedly try again would be frustrating and provide a poor user experience. The guarantee of eventual consistency provided by the anti-entropy system is good enough in this case.

Many metrics and mechanisms have been used to measure the time it takes for these databases to become consistent. Frequently, these architectures are consistent in a matter of milliseconds. This is fast enough to provide a high enough degree of consistency in many databases. For others, compensation is a valid strategy to make up for the latency of eventual consistency. If the frequency of database anomalies and the cost of compensating for them is lower than the cost of maintaining strict consistency in the system, programmers can build in methods to handle these edge cases. This can be difficult in some situations though, as many different anomalies can arise, and all must be accounted for. In addition to compensation methods, programmers may rely on the monotonicity of certain aspects of the system to provide safety guarantees or may use CRDTs in place of standard read/write operations. An example of a CRDT would be a counter that sends an increment operation to the database rather than reading a value, updating it, and sending the updated value. Because CRDT operations are commutative and associative, the database will remain consistent as long as all operations are applied.
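
The counter example can be made concrete with a tiny sketch (mine, not the paper's): replicas apply increment operations rather than overwriting a read value, and because increments commute, every replica that applies the full set of operations ends at the same total, whatever the delivery order (each operation is assumed to be delivered exactly once).

    class OpCounter:
        # Applies increment operations instead of read-modify-write; increments
        # commute, so delivery order does not matter as long as each operation
        # is applied exactly once.
        def __init__(self):
            self.total = 0

        def apply(self, increment):
            self.total += increment

    ops = [1, 5, -2]                 # operations issued by different clients
    a, b = OpCounter(), OpCounter()
    for op in ops:
        a.apply(op)                  # one delivery order
    for op in reversed(ops):
        b.apply(op)                  # a different delivery order
    assert a.total == b.total == 4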

My chief complaint about the paper was that they did not explain how causal consistency systems can be “bolted on” to existing eventually consistent systems. Causal consistency represents the strongest model currently available for eventual consistency. As such a powerful tool for increasing the safety provided by eventually consistent systems, it seems as though causal consistency would need to be implemented at a very low level within the architecture. The authors seem to imply that it is merely some set of functions that can be tossed in on top of an existing system as an upgrade, but I would have liked to see some of the logic behind that claim.




Review 26

This paper talked about eventual consistency and discussed how infrastructure designers deal with consistency, availability, and fault tolerance. For performance and availability reasons, database management systems may provide eventual consistency, with few guarantees about the consistency of the system. The paper discusses several questions related to this topic, including the reasons for eventual consistency, programming under eventual consistency, and the possibility of providing stronger guarantees.

First, the paper describes the background and concept of eventual consistency. In DBMSs nowadays, availability is an important property for achieving an always-on experience for clients. Thus, a system may choose availability over consistency to obtain the benefit of low latency. This motivates eventual consistency, a property that provides few guarantees while preserving availability.

Second, the paper discusses how eventual consistency plays out in practice. For distributed database systems, the paper introduces PBS models. Using PBS prediction models, the authors found that LinkedIn's data stores returned consistent data 99.9 percent of the time within 13.6 ms, and within 1.63 ms on SSDs, which is a desirable result. To support low latency, the system has to rely on consistency compensation; compensation ensures that mistakes are eventually corrected but does not guarantee that no mistake is made. Therefore, infrastructure designers always have to manage the trade-off between availability (low latency) and consistency (correctness).

The strength of this paper is that it describes eventual consistency in detail, including the concept, the implementation, and the surrounding discussion, which gives readers substantial knowledge of the topic. For research on eventual consistency, I think this paper is a good starting point.

The weakness of this paper is that it has few examples. The paper seldom uses examples to illustrate its ideas, which makes it harder for readers to understand. For example, when introducing the concept of eventual consistency, it could use modern database systems as examples to illustrate the need for availability and low latency.

To sum up, this paper discusses eventual consistency and the trade-off between availability and consistency when designing database infrastructure.



Review 27

This paper is an overview of the eventual consistency model, which, as its name suggests, provides eventual consistency for databases. It describes what the model provides, what it does not, and what can be done about it. It does not go into the technical details of implementation, which is both a strength and a weakness of the paper.

Eventual consistency is a database model that says all data will eventually become consistent, so no matter which replica you ask, you will eventually get the same information. The key question is: "How long is eventual?" Obviously, if eventual is too long, this is not a feasible database setup. The benefits of eventual consistency are that it is easier to implement and provides very high availability; the downside is the lack of consistency within a certain (relatively short) time window. Time is a very common metric for eventual consistency, since the shorter the window, the better. One measured deployment showed that data is frequently consistent within hundreds of milliseconds (99.9% after 202 ms).

In my opinion, a key distinction to understand with eventual consistency is safety versus liveness. Safety guarantees that nothing bad happens, while liveness guarantees that something good will eventually happen. Eventual consistency provides no safety guarantees and is purely a liveness property.

The last part of the paper I will discuss is whether it is possible to provide any stronger guarantees without losing the benefits. The paper cites a UT Austin report claiming that nothing stronger than causal consistency is available in the presence of partitions while still providing all the upsides. Causal consistency means that if X performs a write and Y then reads it, Y's subsequent reads will also reflect the writes X had observed before writing. The paper encourages us to push toward this limit and try to achieve causal consistency with all the benefits of eventual consistency.
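
One standard way to capture this ordering is with vector clocks; the sketch below illustrates that general technique, not anything taken from the paper, and the clock values are invented. A causally consistent store applies a write at a replica only after every write it causally depends on has been applied.

    def happens_before(vc_a, vc_b):
        # True if the event stamped vc_a causally precedes the event stamped vc_b.
        keys = set(vc_a) | set(vc_b)
        return all(vc_a.get(k, 0) <= vc_b.get(k, 0) for k in keys) and vc_a != vc_b

    write_by_x = {"X": 1}            # X writes a value
    write_by_y = {"X": 1, "Y": 1}    # Y writes after reading X's write

    # Every replica must apply write_by_x before write_by_y, so no reader
    # ever sees Y's write without also seeing X's.
    assert happens_before(write_by_x, write_by_y)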

One weakness I saw in the paper is that it lacks diagrams entirely. I think a diagram near the beginning showing what eventual consistency means, along with an example of how it works in practice, would have been helpful. The author probably avoided this to stay away from technical details, but it would have aided my understanding of eventual consistency early on without requiring much depth.

Overall, though, I would say it was an informative paper about eventual consistency and what it is. It provided a good introduction and discussion and prepared me to learn about the topic in more depth.



Review 28

The theme of these two papers (the other being the Dynamo paper) is the usefulness of eventual consistency and its applications in industry; indeed, Amazon's Dynamo is the canonical example of why eventually consistent systems are useful in the first place. This paper discusses how the CAP theorem (which states that partitions undermine the promise of both consistency and availability) left DB designers seeking weaker constraints: we don't always need consistency for a good user experience, so how can we relax it a bit? The idea is that, sure, we want the user to get correct data at some point, but if they don't pay attention for a bit, or if they get slightly outdated information without harm to the experience, then it's okay. The paper goes on to answer the question "okay, well, when exactly should we promise that the true value starts being returned?", i.e., defining the "eventual" in eventual consistency.

Interestingly enough, it is possible to probabilistically bound the staleness of a returned item (the authors refer to this concept as "probabilistically bounded staleness"). Often enough, the returned value is the most recent or "recent enough." However, there must be some method in place to compensate for anomalies; the paper describes a somewhat unsettling example of why a bank would want to allow two simultaneous withdrawals to occur, since overdraft fees are the method of eventually reaching consistency. Interestingly, there is also a claim that causal consistency, i.e., that writes/commits that are causally dependent are always seen in the correct order, is the strongest form of consistency available (this is weaker than total consistency, which says that ALL updates are seen in the correct order, not just the causally dependent ones).
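
The overdraft example can be illustrated with a toy compensation sketch (mine, not the authors'): two ATMs on opposite sides of a partition both accept a withdrawal, and when the replicas merge, the bank charges a fee for the resulting overdraft instead of having blocked the second withdrawal up front. The balances and the fee amount are made up.

    def settle_after_partition(opening_balance, withdrawals, overdraft_fee=35):
        # Withdrawals were accepted independently by partitioned ATMs; after the
        # replicas merge, a negative balance is compensated with a fee rather
        # than being prevented up front.
        balance = opening_balance - sum(withdrawals)
        fee = overdraft_fee if balance < 0 else 0
        return balance - fee, fee

    print(settle_after_partition(100, [80, 60]))  # (-75, 35): fixed after the fact, not prevented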

This paper provided a good, broad overview of current thinking on eventually consistent systems. Some open issues are also briefly mentioned in the conclusion; for example, staleness guarantees are impossible, particularly for reads that demand recency ("give me the latest value"). However, I see workloads that specifically demand up-to-date information as a separate problem that could be given its own node distribution, which would be significantly smaller than the ordinary "we don't need it to be THAT consistent" workloads. For example, what if a separate set of clusters kept track of WHERE the most recent values are, in order to facilitate recency reads? This may still incur overhead, especially if such queries are not uniform across the dataset; but then again, if the workload contains a large number of recency reads, an eventually consistent system is probably not ideal anyway.


Review 29

This article covers eventual consistency from its history and concepts to its implementation, how consistency is measured, how it affects the application designer's programming (in designing compensation), and how it can be improved further. It all started with Brewer's CAP theorem, which states that in a partitioned system there is a trade-off between availability and consistency. Knowing this, distributed database designers seek models that offer both availability and as much consistency as possible.

First, the paper explains the eventual consistency concept: anti-entropy, asynchronous broadcast, background replica updates, and managing concurrent writes by determining the "winning" value. It discusses the safety holes in the concept as well. Then it discusses how to measure "eventuality" (most commonly in terms of time and versions). Another interesting approach is Probabilistically Bounded Staleness (PBS), which is calculated from the rate of the anti-entropy protocol; examples of PBS applied to LinkedIn and Yammer are given. The paper then turns to how one should program under eventual consistency. Eventual consistency shifts the burden of avoiding stale reads onto the programmer, since it is difficult to determine the validity of a given value. Fortunately, the paper offers solutions through the CALM theorem and the ACID 2.0 principles. Last, the paper discusses the possibility of providing stronger guarantees than eventual consistency without losing its benefits, proposing causal consistency as a future model. Nevertheless, it also recognizes the limitations of implementing weak or eventual consistency.

One major contribution of this article is its pragmatic approach to understanding eventual consistency. It does not aim to discuss the theory of eventual consistency so much as how and why eventually consistent systems are programmed, deployed, and evolving. It is also nice that the paper addresses the issues and consequences that application designers face when adopting eventual consistency, as well as the alternative solutions.

However, in explaining "compensation by design," I think the paper oversimplifies how the effects of eventual consistency are incorporated into application design (in this case, using the simple increment-number example). The complexity and the consistency requirements programmers face will not always be that simple, and I do not think the problem will always be solvable. Another thing is that the paper tries too hard to show that eventual consistency is acceptable for every system as long as it is compensated well. The writers point out that even banking systems, the most common example for database consistency, ultimately use a socio-technical strategy (the overdraft fee) to resolve inconsistency; still, that does not mean eventual consistency is well suited to banking systems.



Review 30

This paper is an overview article on eventual consistency and the research surrounding it. Its purpose is to introduce the topic of eventual consistency, discuss current research on its use in production database systems, and present some ways in which eventual consistency can be strengthened for systems that demand more than the weakest guarantees.

This article reads as a literature review, and its main contribution is collecting these various works for the community in a single place. Some of the most important works it highlights are those asking “how eventual” eventual consistency is. The authors cite existing work that measures eventually consistent systems in practice, to determine what kinds of performance improvements are seen when a system moves to an eventually consistent model. They also discuss more recent research on compensation mechanisms, which let eventually consistent models be used in systems that demand higher fidelity for the data they store and report to users. The authors finish with a discussion of their own recent work on causal consistency, which makes some concessions, though fewer than eventual consistency does, while achieving comparable speedups and availability with stronger safety guarantees. Although this last contribution seems to answer many of the questions and concerns about eventual consistency, the authors end by taking a step back and acknowledging that they are still working within a weakly consistent system, so there are limits to how much can be improved.

As far as strengths go, I think this paper does a good job of presenting much of the relevant research surrounding eventual consistency, including measurements and implementations in the wild. The examples presented are very useful as well (even Amazon’s Dynamo is mentioned!). In my opinion, the authors also do a good job of pointing out weaknesses in the idea where they exist, while trying to debunk common misconceptions about eventually consistent systems.

As far as weaknesses go, I think the paper could have given more detail on some of the theoretical results it presents (such as the UT Austin result proving that causal consistency is the strongest model achievable without giving up high availability). This seems to be a major result that deserved more coverage. Beyond that, the paper does not feel technical at all until the last couple of sections; I wish there had been more of a roadmap at the beginning about the technical contributions to come, not just about the overview of eventual consistency (which does get a roadmap).






Review 31

Review: Eventual Consistency Today: Limitations, Extensions, and Beyond

Paper Summary:
This paper follows up on an earlier result (the CAP theorem) by presenting notable developments in the theory and practice of eventual consistency, in order to answer the question of how a model that offers so few useful guarantees can support so many usable applications and profitable businesses built on top of it.
The paper focuses on answering three questions: how eventual is eventual consistency, how should one program under eventual consistency, and is it possible to provide stronger guarantees than eventual consistency without losing its benefits? I think these are the key questions for assessing the value of eventual consistency. By answering them, the paper makes a strong argument that resolves the apparent contradiction raised at the beginning and demonstrates the value of the model.

Paper Details:
To answer the first question, the authors discuss experimental results supporting their claim. While the results show that in most cases the system reaches consistency quickly, there is no theoretical bound on this duration. One can therefore still argue that there could be cases, however rare, in which the system takes an extremely long time before a write becomes visible to readers, which raises the question of how reliable such a system can be: during that window, for users issuing reads, the system is in practice unavailable, leaving little point in trading away consistency for availability.

In its later parts, the paper describes ongoing work on overcoming these limits, while still acknowledging that limits remain with currently available techniques.






Review 32

This paper discusses advanced topics related to eventual consistency, including 1) the real-world behavior of eventual consistency, 2) how to program on top of eventually consistent infrastructure, and 3) the possibility of extending eventual consistency with stronger consistency levels while maintaining its high-availability property.

To discuss 1), it first introduces the history and concepts of eventual consistency, and then explains the difference between eventual consistency and the single-system image (SSI), which represents a strongly consistent view of a distributed system. It then introduces common implementation techniques for achieving eventual consistency, and points out how hard it is to cover the corner cases that arise in complicated synchronization scenarios.
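As a rough illustration of the implementation style described (asynchronous replication plus background anti-entropy that keeps a “winning” version), here is a minimal sketch. The timestamp-based last-writer-wins merge and the Replica class are my own simplifying assumptions, not the paper’s design.

    # Hypothetical sketch of eventual consistency via anti-entropy:
    # each replica applies writes locally, and a background pass exchanges
    # versions and keeps the "winning" one (here: highest timestamp wins).

    class Replica:
        def __init__(self):
            self.store = {}                      # key -> (timestamp, value)

        def write(self, key, value, ts):
            cur = self.store.get(key)
            if cur is None or ts > cur[0]:       # last-writer-wins
                self.store[key] = (ts, value)

        def read(self, key):
            entry = self.store.get(key)
            return entry[1] if entry else None

    def anti_entropy(a, b):
        """Background pass: exchange entries so both replicas converge."""
        for key in set(a.store) | set(b.store):
            for src, dst in ((a, b), (b, a)):
                if key in src.store:
                    dst.write(key, src.store[key][1], src.store[key][0])

    r1, r2 = Replica(), Replica()
    r1.write("x", "hello", ts=1)     # accepted at r1 only
    r2.write("x", "world", ts=2)     # concurrent write at r2
    anti_entropy(r1, r2)             # both replicas now agree
    assert r1.read("x") == r2.read("x") == "world"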

To discuss 2), after introducing the relevant concepts, it describes the authors’ implementation of Probabilistically Bounded Staleness (PBS), which measures the expected recency of reads of data items, in Cassandra, and illustrates how PBS can be used to tune the latency/consistency behavior of an eventually consistent store.
It then weighs the compensations, costs, and benefits of eventual consistency, and presents the class of computations, characterized by the CALM theorem and ACID 2.0 properties, whose compensation can be handled by design.
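To give a feel for how a PBS-style estimate works, here is a minimal Monte Carlo sketch of the probability that a read issued t milliseconds after a write returns the latest version in a Dynamo-style partial-quorum system. The exponential delay distribution, replication factor, and quorum sizes are illustrative assumptions; the real PBS model is calibrated against measured latency distributions.

    # Hypothetical PBS-style Monte Carlo: estimate P(read at time t sees the
    # latest write) with N replicas and R/W partial quorums, assuming each
    # replica receives the write after an exponentially distributed delay.
    import random

    def p_consistent(t_ms, n=3, r=1, w=1, mean_delay_ms=10.0, trials=100_000):
        hits = 0
        for _ in range(trials):
            # delay until each replica has applied the write; the w fastest
            # replicas are assumed to form the write quorum (delay 0)
            delays = sorted(random.expovariate(1.0 / mean_delay_ms) for _ in range(n))
            for i in range(w):
                delays[i] = 0.0
            # a read of r random replicas is consistent if at least one of
            # them has applied the write by time t
            chosen = random.sample(range(n), r)
            if any(delays[i] <= t_ms for i in chosen):
                hits += 1
        return hits / trials

    for t in (0, 5, 10, 50):
        print(f"t = {t:3d} ms  ->  P(consistent) ~ {p_consistent(t):.3f}")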

To discuss 3), it first introduces commutative replicated data types (CRDTs), which cover a variety of standard data types and provide eventually consistent data structures that are guaranteed to converge without violating their intended safety properties. It then gives an example of distributed increments to the same counter: instead of racing to update a single shared value, a CRDT counter accumulates increments per replica and sums them when the value is read, which avoids the race condition and the lost-update problem under eventual consistency. It also discusses causality between operations and more advanced transaction algorithms that the authors believe can push the limits of eventual consistency.
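The counter example is easiest to see in code. Below is a minimal sketch of a grow-only counter CRDT (a G-Counter): each replica increments only its own slot, merge takes an element-wise maximum, and the counter’s value is the sum of all slots, so merges commute and can be repeated safely. The class and method names are mine, not the paper’s.

    # Hypothetical G-Counter CRDT sketch: per-replica increment slots,
    # element-wise max on merge, value = sum of slots.

    class GCounter:
        def __init__(self, replica_id):
            self.replica_id = replica_id
            self.slots = {}                      # replica_id -> count

        def increment(self, amount=1):
            self.slots[self.replica_id] = self.slots.get(self.replica_id, 0) + amount

        def merge(self, other):
            # commutative, associative, idempotent: safe to apply in any
            # order, any number of times
            for rid, count in other.slots.items():
                self.slots[rid] = max(self.slots.get(rid, 0), count)

        def value(self):
            return sum(self.slots.values())

    a, b = GCounter("a"), GCounter("b")
    a.increment(); a.increment()     # two increments at replica a
    b.increment()                    # one concurrent increment at replica b
    a.merge(b); b.merge(a)           # anti-entropy in either order
    assert a.value() == b.value() == 3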


Strengths:
1. This paper defines a lot of high-level distributed-systems terminology, including the CAP theorem, liveness and safety, the CALM theorem, ACID 2.0, and CRDTs. Moreover, it gives detailed introductions along with rules of thumb for applying these results; for example, it notes that under the CAP theorem you can trade consistency against availability, but you cannot sacrifice partition tolerance.
2. The paper gives a great overview of eventual consistency, and the authors carried out substantial PBS experiments and implemented PBS in a real-world system (Cassandra), which demonstrates both novelty and impact.

Weaknesses:
1. Although the paper gives a great introduction to PBS and states that PBS showed good results in experiments at LinkedIn and Yammer, it does not show the detailed design or any graphs or detailed presentation of those experiments, which I think would have made the claims more convincing.
2. Although the paper shows the great potential of handling compensation in the framework rather than pushing the problem to the client side, the class of computations that can be compensated on the server side is limited, and all other conflicts still have to be pushed to the client. The paper devotes only a small portion of its discussion to the computations that must be handled on the client side, and does not describe in any detail how to implement them there.






Review 33

This paper introduces eventual consistency, a weak consistency model that is widely adopted in large-scale distributed systems. As opposed to strong consistency models, eventual consistency provides few guarantees: it only guarantees that, in the absence of further updates, all reads of a data item will eventually return the same value. Eventual consistency was proposed in light of a notable conjecture that it is impossible to achieve high availability and strong consistency at the same time; thus, to provide an “always-on” service, one has to sacrifice consistency. In practice, an eventually consistent system eventually returns the last updated value for reads. Though this is a fairly weak model, observations of several production systems show that data items converge quickly, on the order of a few hundred milliseconds. When programming around consistency anomalies, compensation techniques are used to make up for mistakes. The programs that are safe under eventual consistency can be formally captured by the CALM theorem (consistency as logical monotonicity), and monotonicity can be expressed through a space of design patterns built from associative, commutative, and idempotent operations. By leveraging CALM, data-store consistency and application-level consistency are separated: inconsistency in the underlying data store does not affect higher-level application invariants. There are also models that provide stronger guarantees than eventual consistency, such as causal consistency.
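A tiny sketch of the associative/commutative/idempotent design pattern that the CALM and ACID 2.0 discussion points to: a grow-only set whose merge is set union, so replicas can apply updates in any order, any number of times, and still converge. The GSet name and the example values are my own illustration, not the paper’s.

    # Hypothetical sketch of an ACID 2.0-style (associative, commutative,
    # idempotent) operation: a grow-only set merged by union.

    class GSet:
        def __init__(self):
            self.items = set()

        def add(self, item):
            self.items.add(item)

        def merge(self, other):
            self.items |= other.items      # union: order- and repeat-insensitive

    r1, r2 = GSet(), GSet()
    r1.add("liked:post-42")
    r2.add("liked:post-17")

    r1.merge(r2); r1.merge(r2)             # duplicate delivery is harmless (idempotent)
    r2.merge(r1)                           # merging in the other order converges too
    assert r1.items == r2.items == {"liked:post-42", "liked:post-17"}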

The main advantage of eventual consistency is that it greatly improves availability while still providing a “good enough” consistency guarantee that works well in production systems. By relaxing the consistency model, it can also ease development of the store itself by avoiding the need to program against a host of corner cases.

The weakness of eventual consistency is equally obvious: it offers few guarantees, as the price of high availability. For systems with relaxed latency requirements or with strong data-safety requirements, eventual consistency is not appropriate. For example, high-frequency trading has much stricter consistency requirements; a stronger consistency model, or even an SSI, is better suited to that class of workloads.

