Review for Paper: 29-Eventual Consistency Today: Limitations, Extensions, and Beyond

Review 1

This paper gives a broad discussion of eventual consistency, focusing on three questions: what eventual consistency is, how one should program under eventual consistency, and how to provide stronger (safety) guarantees than eventual consistency without losing its benefits (availability).

The paper first introduces the concept of and motivations for eventual consistency. Its benefits include high availability and ease of implementation, though it offers only a liveness property; the main drawback is the lack of safety guarantees. One thing worth noting is that eventual consistency works well in practice and often behaves like strong consistency in production stores.

Then, the paper discusses programming under the eventual consistency model. The discussion is framed around the trade-off between compensation costs and benefits, and designers should decide whether to use eventual consistency based on their use case. Work on CRDTs separates data-store and application-level consistency concerns and can help achieve application-level consistency.
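To make the CRDT idea concrete, here is a minimal sketch (my own illustration in Python, not code from the paper) of a grow-only counter, one of the simplest CRDTs: each replica increments only its own slot, and merging takes an element-wise maximum, so replicas converge regardless of message order.

    # Minimal sketch of a grow-only counter CRDT (G-Counter), illustrating how
    # convergent data types let replicas merge state without coordination.
    # Illustrative example only, not code from the paper under review.

    class GCounter:
        def __init__(self, replica_id, num_replicas):
            self.replica_id = replica_id          # which replica this copy lives on
            self.counts = [0] * num_replicas      # one slot per replica

        def increment(self):
            # Each replica only ever increments its own slot.
            self.counts[self.replica_id] += 1

        def value(self):
            # The logical counter value is the sum over all replicas' slots.
            return sum(self.counts)

        def merge(self, other):
            # Merge is an element-wise max: commutative, associative, idempotent,
            # so replicas converge no matter how often or in what order they gossip.
            self.counts = [max(a, b) for a, b in zip(self.counts, other.counts)]

    # Two replicas diverge and then converge after exchanging state.
    a, b = GCounter(0, 2), GCounter(1, 2)
    a.increment(); a.increment()
    b.increment()
    a.merge(b); b.merge(a)
    assert a.value() == b.value() == 3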

Finally, the paper discusses possibilities for strengthening eventual consistency by adding safety guarantees. One key claim is that no consistency model stronger than causal consistency can remain available in the presence of partitions.

I like this paper's structure. It lists the three questions to discuss and proceeds from history and motivation, to use cases of eventual consistency, to its limits. This progressive structure is clear and makes the paper easier for readers to follow. The part I like most is the discussion of the trade-off between benefit and cost, which is a very practical problem to think about in real applications, and the banking-service example helps make the problem and the discussion concrete.

My main takeaway is that this paper gives me insight into eventual consistency's limitations and when eventually consistent systems can still be useful. The discussion of the CAP impossibility result also teaches me about the fundamental limits of distributed systems.


Review 2

Distributed services are widely deployed in industry and require high availability and performance to meet growing customer demand. People have realized that there exists a tradeoff between consistency and performance. This paper presents eventual consistency, a consistency model used in distributed computing to achieve high availability that informally guarantees that, if no new updates are made to a given data item, eventually all accesses to that item will return the last updated value.

Some of the strengths of this paper are:
1. Eventual consistency is easier to implement than strong consistency. It simplifies the design and operation of distributed services.
2. Eventual consistency improves availability and performance of distributed services.

Some of the drawbacks of this paper are:
1. Eventual consistency doesn’t guarantee ACID properties.
2. Eventual consistency is purely a liveness guarantee (reads eventually return the same value) and does not make safety guarantees. An eventually consistent system can return any value before it converges.
3. The paper is not presented in a logical, organized way, and no experimental results are presented to support the theoretical arguments.



Review 3

As proved by the CAP (consistency, availability, and partition tolerance) theorem, distributed systems requiring always-on, highly available operation cannot guarantee the illusion of coherent, consistent single-system operation in the presence of network partitions. Therefore, it is reasonable to drop strong guarantees in favor of weaker models for better performance, which is where eventual consistency comes in.

Eventual consistency provides few guarantees: If no additional updates are made to a given data item, all reads to that item will eventually return the same value. At no given time can the user rule out the possibility of inconsistent behavior: the system can return any data and still be eventually consistent—as it might “converge” at some later point. The only guarantee is that, at some point in the future, something good will happen.

The main contributions of the paper are three takeaways which can help practitioners running distributed systems:
1. New prediction and measurement techniques allow system architects to quantify the behavior of real-world eventually consistent systems. When verified via measurement, these systems appear strongly consistent most of the time.
2. System architects are able to deal with inconsistencies, either via external compensation outside of the system or by limiting themselves to data structures that avoid inconsistencies altogether.
3. It is possible to achieve the benefits of eventual consistency while providing substantially stronger guarantees, including causality and several ACID (atomicity, consistency, isolation, durability) properties from traditional database systems, while still remaining highly available.

The main advantages of this paper are:
1. It is a pragmatic introduction to several developments on the cutting edge of our understanding of eventually consistent systems.
2. It provides the necessary background for understanding both how and why eventually consistent systems are programmed, are deployed, and have evolved, as well as further directions.

The main disadvantage of the paper is that it does not really present the disadvantages of eventual consistency. The difficulty of developing against it (and the natural reaction of just "handwaving" the consistency model rather than using optimistic locking with a vector clock at the identified synchronization points) is the main one. Besides, eventual consistency requires that writes to one replica will eventually appear at other replicas, and that once all replicas have received the same set of writes, they will hold the same values for all data. This weak form of consistency does not restrict the ordering of operations on different keys in any way, thus forcing programmers to reason about all possible orderings and exposing many inconsistencies to users.
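Since I mention optimistic locking with vector clocks, here is a small sketch (my own, in Python; the helper name is hypothetical, not from the paper) of how comparing vector clocks distinguishes ordered updates from truly concurrent ones at a synchronization point:

    # Minimal sketch of vector-clock comparison, the mechanism alluded to above
    # for detecting conflicting (concurrent) updates. Illustrative only.

    def compare(vc_a, vc_b):
        """Return 'before', 'after', 'equal', or 'concurrent' for two vector clocks,
        each a dict mapping replica id -> logical counter."""
        keys = set(vc_a) | set(vc_b)
        a_le_b = all(vc_a.get(k, 0) <= vc_b.get(k, 0) for k in keys)
        b_le_a = all(vc_b.get(k, 0) <= vc_a.get(k, 0) for k in keys)
        if a_le_b and b_le_a:
            return "equal"
        if a_le_b:
            return "before"      # a happened before b; b supersedes a
        if b_le_a:
            return "after"       # b happened before a; a supersedes b
        return "concurrent"      # neither dominates: a true conflict to resolve

    # Writes accepted on different replicas during a partition are concurrent:
    assert compare({"r1": 2, "r2": 0}, {"r1": 1, "r2": 1}) == "concurrent"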


Review 4

Problem & motivations:
The CAP (consistency, availability, and partition tolerance) theorem changed the landscape of how distributed storage systems are architected. In order to achieve high availability and partition tolerance, many distributed-system architects dropped "strong" guarantees in favor of weaker models. Among them, the most notable is eventual consistency.

Main Contribution:
Eventual consistency, as its name says, means all servers will eventually reach the same result. In other words, if no additional updates are made to a given data item, all reads to that item will eventually return the same value. It provides very weak consistency and can allow inconsistent reads and writes. Yet it has two main benefits: (1) it provides high performance and availability, and (2) the model itself is fairly easy to implement as long as replicas exchange information with one another. Notably, however, it does not even eventually provide a single-system image (SSI), which means the final result is not guaranteed to be serializable. The paper also discusses some stronger models, such as causal consistency.

Drawbacks:
A journal- or magazine-style article like this one is always much easier to read than a conference paper. If the authors had provided an example of a well-known eventually consistent system, it would have been easier for us to understand the typical environment in which users apply this model.



Review 5

This paper speaks on eventually consistent systems, a notable distributed systems model with weak guarantees about reading correct data. These weak models became a concern after Eric Brewer's claim that "distributed systems requiring always on, highly available operation cannot also provide consistent single-system behaviors given that there are network partitions between active servers." Eventually consistent systems provide the weak guarantee that if a data object is not updated beyond a certain point in time, then reads of it will eventually converge to the same value. This is an important contribution because many systems are built on top of this model, which trades stronger guarantees for efficiency.

The paper introduces the idea of maintaining a single-system image (SSI) in a distributed system. An SSI cannot be maintained if there are partitions between nodes or a loss of connection in the event of node failure, since data-update propagation must then block other transactions. Eventual consistency does not maintain an SSI.

The implementation of this model is done by having replicas of the data perform anti-entropy, a propagation of information about which write updates each node has seen. Eventual consistency does not guarantee safety but instead guarantees that at some point in time the system will be "good". The paper introduces probabilistic methods for determining how long data may be stale in a system so we know when it will be usable.
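To illustrate what anti-entropy looks like, here is a toy sketch (my own simplification in Python, using last-writer-wins timestamps; it is not the paper's implementation) of two replicas converging after a gossip round:

    # Minimal sketch of anti-entropy: replicas periodically exchange the set of
    # writes they have seen, so that all replicas eventually converge.

    class Replica:
        def __init__(self, name):
            self.name = name
            self.store = {}            # key -> (timestamp, value)

        def write(self, key, value, timestamp):
            self.store[key] = (timestamp, value)

        def read(self, key):
            entry = self.store.get(key)
            return entry[1] if entry else None

        def anti_entropy(self, other):
            # Pull any writes the other replica has seen that are newer than ours.
            for key, (ts, val) in other.store.items():
                if key not in self.store or self.store[key][0] < ts:
                    self.store[key] = (ts, val)

    r1, r2 = Replica("r1"), Replica("r2")
    r1.write("x", "hello", timestamp=1)      # r2 has not seen this write yet
    assert r2.read("x") is None              # stale read is allowed before convergence
    r2.anti_entropy(r1)                      # background gossip round
    assert r2.read("x") == "hello"           # replicas have converged on x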

I liked how the authors used accessible language to give an overview of eventual consistency, but at some points I felt it came at the cost of deeper explanation. The repeated use of phrases like the system will eventually be "good" made it feel as though some of the challenges were restating the same point.



Review 6

Eventual Consistency Today: Limitations, Extensions, and Beyond

This paper focuses on eventual consistency. Eventual consistency is a form of weak consistency with the following property: the storage system ensures that if no new updates are made to a given item, eventually all accesses will return the last updated value. If there are no failures, the size of the window of consistency violations depends on communication latency, the number of replicas, and the load. Ultimately, all replicas will see the update.

There are three main problems that this paper focuses on: (1) how eventual eventual consistency really is; (2) how one should program under eventual consistency; and (3) the possibility of providing stronger guarantees than eventual consistency without losing its benefits. According to the CAP theorem, it is impossible in a distributed database to achieve an always-on experience while ensuring that users read the latest written version during partial failure. So people came up with weak consistency and found that it can also offer availability and high performance.

The contribution of this paper is to give an overall survey of and insight into eventual consistency, which improves availability and performance. Eventual consistency becomes more and more important as cluster sizes grow, because strong consistency leads to low availability and poor performance; the trade-off is that we may need to accept short consistency violations.

The advantages of eventual consistency, according to this paper, are that it is easy to implement and does not require handling the corner cases that arise in complex failure scenarios. Also, although "eventually" is a weak guarantee, in practice nearly all updates propagate within a short amount of time, which means that "eventually" is under control.

One thing to improve, even though this is a survey-style paper, is that it should list some examples for illustration; it is relatively abstract.


Review 7

This paper introduces how eventual consistency was born, how it developed, and where it stands today. By simplifying the design and operation of distributed services, eventual consistency improves availability and performance at the cost of semantic guarantees to applications. While eventual consistency is a particularly weak property, eventually consistent stores often deliver consistent data.

This paper mainly talked about three questions. How eventual is eventual consistency? How should one program under eventual consistency? And is it possible to provide stronger guarantees than eventual consistency without losing its benefits?

One important concept, the CAP theorem, should be well known: a distributed system can meet at most two of Consistency, Availability, and Partition tolerance. If a system does not have to tolerate network partitions, it can achieve data consistency and availability at the same time, which protocols can usually guarantee. However, can a system that ignores network partitions really be called distributed? Therefore, in a distributed system, data consistency and availability cannot both be guaranteed at the same time.


Eventual consistency is useful because, given the large scale of today's applications, it lets an application respond to users faster than before. While eventual consistency is relatively easy to achieve, the current definition leaves some unfortunate holes. The difficulty with eventual consistency is that it makes no safety guarantees—eventual consistency is purely a liveness property. For meaningful guarantees, safety and liveness properties need to be taken together. Despite the lack of safety guarantees, eventually consistent data stores are widely deployed.

As for metrics of eventual consistency, time and versions are perhaps the most intuitive, but there are a range of others, such as numerical drift from the "true" value. An application may require safety properties for its use. There is a growing body of knowledge about how to program eventually consistent stores. Programming around consistency anomalies is similar to speculation: you don't know what the latest value of a given data item is, but you can proceed as if the value presented is the latest. Recent research has provided "compensation-free" programming for many eventually consistent applications. CALM stands for consistency as logical monotonicity: programs that are monotonic, meaning they only accumulate facts and never retract them, can safely run on an eventually consistent store.
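To illustrate the CALM intuition, here is a small sketch (my own example, not from the paper): a monotonic computation over an ever-growing set of facts gives the same answer regardless of arrival order, while a non-monotonic check can change its mind as more facts arrive.

    # Sketch of the CALM intuition: a monotonic program only accumulates facts,
    # so replaying the same writes in any order yields the same final answer.
    # A non-monotonic check ("no bids yet?") can flip as late facts arrive,
    # so it needs coordination or compensation. Illustrative only.

    def union_all(batches):
        # Monotonic: the result only grows as more batches of facts arrive,
        # and the final set is the same for any arrival order.
        facts = set()
        for batch in batches:
            facts |= set(batch)
        return facts

    batches = [["bid:alice:5"], ["bid:bob:7"], ["bid:alice:9"]]
    assert union_all(batches) == union_all(reversed(batches))

    def no_bids_yet(facts):
        # Non-monotonic: the answer can flip from True to False when a late
        # "bid:*" fact arrives, so acting on it early may require compensation.
        return not any(f.startswith("bid:") for f in facts)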

The main contribution of this paper is that it shows readers the big picture of eventual consistency. It also discusses some implementation details of eventual consistency. This is not a thorough survey paper; the good point is that it discusses eventual consistency from the viewpoint of applications and developers.

One of the weak points is that this paper does not have a very good structure, and it reads more like an engineering paper than an academic one.




Review 8

The paper starts by stating the CAP theorem, which I hadn’t seen before. To paraphrase, in the presence of network partitions, highly available distributed database systems cannot guarantee the illusion of behaving like a single system. Of course, this is a very big problem, as a typical expectation of distributed database systems is that they will appear to the user as if they are a single database. This leads to the introduction of the primary subject of the paper, eventual consistency. Eventual consistency means that if there are no further updates, the database will at some point in the future become consistent. It is popular in industry as it can significantly improve latency, even though it has some undesirable properties.

The authors describe how truly weak eventual consistency is - it gives no guarantees, for instance, about *when* data will be consistent. It is even technically possible under eventual consistency that a value that was never written is returned, as long as the final state is consistent. One way to measure the success of eventual consistency, Probabilistically Bounded Staleness, is introduced. It measures how likely it is for a read to return the most recent data within a certain period of time, and was tested with the workloads of companies like LinkedIn. The strong numbers show why eventual consistency is typically acceptable in industry.
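To show the flavor of the question PBS answers, here is a toy Monte Carlo sketch (my own; the exponential delay model and its parameters are invented for illustration, whereas real PBS fits measured latency distributions): it estimates how likely a read issued t milliseconds after a write is to observe that write.

    # Toy Monte Carlo in the spirit of Probabilistically Bounded Staleness:
    # estimate the probability that a read issued t milliseconds after a write
    # observes that write, given a distribution of replica propagation delays.

    import random

    def prob_consistent(t_ms, mean_propagation_ms=10.0, trials=100_000):
        hits = 0
        for _ in range(trials):
            propagation_delay = random.expovariate(1.0 / mean_propagation_ms)
            if propagation_delay <= t_ms:
                hits += 1                 # the read's replica already has the write
        return hits / trials

    # How stale might a read be 5 ms vs. 50 ms after a write?
    print(prob_consistent(5))    # roughly 0.39 under this toy model
    print(prob_consistent(50))   # roughly 0.99 under this toy model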

Compensation is described (using a very clear ATM example) to show how mistakes can be corrected, assuming that we can’t stop them from happening. Additionally, the CALM theorem and CRDTs are introduced as ways of forcing consistency. The authors do a good job describing use cases in which consistency is important, and when it isn’t - for example, we don’t care much about consistency for twitter updates. There was an idea brought up at the end of the paper that I thought was interesting, mainly drawing parallels between eventual consistency and some other issues in database systems. In this course, we have frequently discussed how most deployments of database systems do not actually use serializable concurrency control. The authors have done some work showing that some weaker models can actually be achieved in distributed systems with high availability. I think that this provides a great path for future work, as it is well known that database users are happy to utilize these models.

This paper had a significantly different format than others that we have looked at during the semester. It isn’t quite a survey paper, but reads a bit like one, as its goal is not to introduce novel solutions. It was published in the ACM Queue magazine, which makes me think that it is perhaps intended for a wider audience than papers published in venues such as SIGMOD or VLDB. Perhaps it was because it was a magazine style article, but there were no diagrams to support any of the explanations. I thought these would have been useful in a few areas - for instance when explaining causal consistency.



Review 9

This article describes the eventual consistency model and focuses on immediately applicable takeaways for practitioners. Unlike a survey paper, which gives an overview of all the literature surrounding eventual consistency, this article provides the background for understanding the motivation for eventual consistency and how to program and deploy an eventually consistent system.

The eventual consistency model is a result of the CAP theorem, which states that it’s impossible to simultaneously achieve availability and consistency in the presence of network partitions. Since one can’t sacrifice partition tolerance, a choice must be made between availability and consistency. If an application wants high availability, it has to use a weaker consistency model, and one choice is eventual consistency.

Eventual consistency only guarantees that, with no additional updates, all reads of an item will eventually return the same value, which is only a liveness property. There’s no safety guarantee, which means it allows a read operation to return a value that was never written to the database! Therefore, though eventual consistency is defined that way, any non-trivial implementation should provide both safety and liveness. Also, measurements have shown that most implementations of eventual consistency are actually often strongly consistent (the time needed to converge is very small, only tens or hundreds of milliseconds).

The implementation of eventual consistency has two important components. First, to ensure convergence, replicas must exchange information with one another about which writes they have seen. This step is called anti-entropy and should be done as an asynchronous background process. Second, the logic used to compensate for incorrect actions made using inconsistent data must be provided. This requirement, however, is sometimes hard to satisfy, as the engineer needs to think through each possible sequence of anomalies and figure out the correct compensation logic. Fortunately, research and prototypes for building eventually consistent data types and programs can ease the burden of reasoning about these anomalies.
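To make the compensation idea concrete, here is a small sketch (my own, in Python, with an invented overdraft policy in the spirit of the paper's ATM discussion) of acting optimistically on possibly stale data and compensating after replicas converge:

    # Sketch of compensation: proceed optimistically on possibly stale data and
    # correct the mistake after the fact (e.g., an overdraft fee plus an apology).
    # Names and policy are invented for illustration.

    def withdraw(balance_seen, amount):
        # Act on the (possibly stale) balance the replica returned.
        return {"dispensed": amount, "balance_seen": balance_seen}

    def reconcile(true_balance, dispensed):
        # Later, once replicas converge, detect and compensate for the anomaly.
        new_balance = true_balance - dispensed
        if new_balance < 0:
            return new_balance - 25, "charge overdraft fee and send apology email"
        return new_balance, "no compensation needed"

    # Two ATMs both saw a stale balance of 100 during a partition:
    balance_after, action = reconcile(true_balance=100, dispensed=150)
    print(balance_after, action)   # -75 'charge overdraft fee and send apology email'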

Actually, a stronger consistency model that is available in the presence of partitions is achievable, as long as it’s not stronger than causal consistency. Existing DBMS can also be made highly available under weak isolation levels.

This is a very good article for anyone who is new to the eventual consistency model. It provides clear explanations and examples and is quite easy to understand. It would be better if it provided more information about implementing eventual consistency; for example, analyzing a simplified system to see how eventual consistency is achieved.


Review 10

In the paper "Eventual Consistency Today: Limitations, Extensions, and Beyond", Peter Bailis and Ali Ghodsi discuss the advantages and disadvantages of eventually consistent infrastructures. Eric Brewer, VP at Google, predicted that distributed systems requiring always-on, highly available operations cannot guarantee the illusion of coherent, consistent single-system operations. Network partitions between these distributed systems severely cut communication between active servers. What followed was many architectures dropped "strong" guarantees in favor of weaker models - namely, eventual consistency. Eventual consistency really only says one thing - all reads to a data item will eventually be the same, but this point of convergence is unknown. The main goal of this retrospective paper is to answer three questions:
1) How eventual is eventual consistency?
2) How does one program under this model?
3) Can we have a strong model without losing its benefits?
It should be noted that Amazon uses this model to address its customer philosophy and workload needs. Since the use of eventual consistency has been proven at the high end of industry, discussing its benefits is important.

The paper is divided into several sections:
1) Eventual Consistency History/Concepts
a) Available Alternatives: Impossible to be both available and consistent in the presence of failures. Systems that are highly available eventually converge to a single value, but this value is not specified. Furthermore, the window of time for this to occur is unspecified as well.
b) Implementation: Replicas exchange information about writes: anti-entropy. This uses an asynchronous all-to-all broadcast of the last-written value as a background process. Thus, we don't need to write difficult edge-case code.
c) Safety and Liveness: Safety means nothing bad happens. Liveness means something good happens eventually. Eventual consistency is a bare-minimum guarantee: it is purely a liveness property and provides no safety on its own.
2) Eventual
a) Metrics + Mechanisms: Time -> how long does it take for users to see their own writes? Versions -> how many versions stale is the data a read returns? Measurement vs. prediction describes consistency now vs. in the future.
b) Probabilistic Bounded Staleness: Provides an expectation of recency for reads of data items. This allows us to measure how far an eventually consistent system deviates from that of a strongly consistent, linearizable system.
c) Strongly Consistent: Eventually consistent latency = 500ms but Strongly consistent latency = 12 seconds (Amazon).
3) Programming
a) Costs and Benefits: We have to decide whether we want users to have a better experience or a "correct" experience.
b) Compensation: In the case of a false view for a user, programmers need to issue apologies - much more annoying than implementing consistency.
4) Stronger than Eventual
a) Limitations: No consistency model stronger than causal consistency is available in the presence of partitions. This means the upper bound for highly available systems is a very familiar consistency model. However, recent research shows that weaker models can be implemented in a distributed environment while providing high availability.

Much like other papers, this one has some drawbacks. The first drawback I noticed was that the paper did not discuss eventual consistency under frequent updates. It seems that if a particular value is always being updated, the system will never reach a consistent state - contradictory to its definition. Furthermore, a user may never see the correct updates, a violation of Amazon's customer philosophy. The second drawback is that the paper did not describe the pain points of programming eventually consistent systems at the human level. Such a system puts a big responsibility on programmers to be both efficient and error-free, something that we all know will never happen. Google's official statement on eventually consistent systems supports this point: "We think this is an unacceptable burden to place on developers and that consistency problems should be solved at the database level".


Review 11

This paper describes the qualities of eventually consistent DBMSs. While consistency can be an important ACID property, ensuring consistency often means that the DBMS responds to requests more slowly, especially in a highly distributed DBMS. As such, when response time is important, such DBMSs will forgo true consistency in favor of an inconsistent database where inconsistencies are eventually, not immediately, resolved.

Consistency in a distributed database is usually limited by communication among the various replicas. As such, while consistent data may not be immediately available, replicas can constantly exchange info to "reverse entropy". While this eventual consistency is a very weak guarantee of data safety, it tends to work quite well in practice. One way of getting a better guarantee is with probabilistically bounded staleness; i.e., a given amount of time after a write, reads return up-to-date data with a given probability.

Generally, the way of working with such a database is to assume any data found is consistent with the rest of the replicas, and correct later if that isn’t the case. In practice, such corrections are rather rare. However, implementing correction code can be a substantial burden on the programmer. As such, there are generally two solutions:

Don’t correct anything, if errors are generally minor.
Use operations that are always correct regardless of their order, so corrections are trivial.

The second case applies to operations that are associative and commutative, so they can be reordered freely, and idempotent, so that duplicate operations are safe. These properties are referred to as ACID 2.0.
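As a concrete illustration of these ACID 2.0 properties, here is a tiny sketch (my own example, not from the paper) using set union to merge a replicated shopping cart; the asserts check associativity, commutativity, and idempotence.

    # Sketch of an ACID 2.0 (associative, commutative, idempotent) merge:
    # set union as a conflict-free way to combine a replicated shopping cart.
    # Illustrative only.

    def merge(cart_a, cart_b):
        return cart_a | cart_b        # set union

    a = {"book"}
    b = {"pen"}
    c = {"mug"}

    assert merge(merge(a, b), c) == merge(a, merge(b, c))    # associative
    assert merge(a, b) == merge(b, a)                        # commutative
    assert merge(a, a) == a                                  # idempotent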

This paper is able to describe trading off consistency for quick access, which is extremely important in large systems used by many people simultaneously. The automatic methods of handling conflicts help reduce the programming burden and the errors introduced by applications accessing the database.

On the negative side, the paper’s generality means that it can’t go into many specifics about how any individual DBMS functions, so it does have less applicability. As well, it claims that eventually consistent databases don’t lose very much in actual data safety, but the paper isn’t able to give very much data to back its claims.




Review 12

As the CAP impossibility result shows that it is impossible to simultaneously achieve availability and ensure consistency in the presence of partial failure (partitions), eventual consistency was developed as a weaker consistency model that fits industry scenarios requiring low latency. Eventual consistency achieves low latency by sacrificing the guarantee that all users see the same results at all times, instead tolerating that some users see results before others, with all users eventually seeing the same results. By doing so, eventual consistency achieves high availability. According to the paper's evaluation, although eventual consistency is a weaker consistency model that provides only liveness and no safety, it acts like a strong consistency model the majority of the time, and the trade-off is worthwhile considering the availability and performance it offers.
Strengths:
(1) Low latency and availability.
(2) Easy to implement
(3) Suitable for industry scenarios and acts as a strong consistency for the majority of the time
Weak points:
(1) It does not guarantee safety theoretically and sacrifices consistency



Review 13

This article discusses eventual consistency. Eventual consistency provides few guarantees: it guarantees that, if no additional updates are made to a given data item, all reads to that item will eventually return the same value. The article focuses on three questions and preliminary answers: how eventual is eventual consistency, how should one program under eventual consistency, and is it possible to provide stronger guarantees than eventual consistency without losing its benefits. To answer the first question and quantify eventual consistency, Probabilistically Bounded Staleness (PBS) is used. PBS provides an expectation of recency for reads of data items, which allows measuring how far an eventually consistent store’s behavior deviates from that of a strongly consistent, linearizable store. The degree of inconsistency is determined by the rate of anti-entropy. If replicas constantly exchange their last-written writes, then the window of inconsistency is bounded by the network delay and the local processing delay at each node. If replicas delay anti-entropy, then that delay is added to the window of inconsistency. Using the authors' PBS consistency prediction tool, the consistency of three eventually consistent stores running in production is quantified. The PBS models predicted that LinkedIn’s data stores returned consistent data 99.9 percent of the time within 13.6 ms, and on SSDs within 1.63 ms. These eventually consistent configurations were 16.5 percent and 59.5 percent faster than their strongly consistent counterparts at the 99.9th percentile. These results characterize the consistency of real-world eventually consistent stores.

Answering the second question, of how to program under eventual consistency, programmers have traditionally designed for compensation. Compensation is error-prone and laborious, and it exposes the programmer to the effects of replication. An alternative is design guided by the CALM theorem. CALM stands for consistency as logical monotonicity: programs that are monotonic, meaning they compute an ever-growing set of facts and never “retract” facts that they emit, can always be safely run on an eventually consistent store. CALM tells programmers which operations and programs can guarantee safety when used in an eventually consistent system.

For the third question, it is possible to achieve the benefits of eventual consistency while providing substantially stronger guarantees, including causality and several ACID properties from traditional database systems, while still remaining highly available. Systems with causal consistency can achieve this. Causal consistency guarantees that each process’s writes are seen in order, that writes follow reads, and that transitive data dependencies hold. In summary, eventual consistency improves availability and performance at the cost of semantic guarantees to applications.
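To illustrate what causal consistency asks of a replica, here is a small sketch (my own; the dependency-tracking scheme is invented for illustration and is not the paper's design) in which a replica buffers a write until everything it causally depends on has been applied:

    # Sketch of causally consistent application of writes: a replica applies a
    # write only after all writes it causally depends on have been applied,
    # buffering it otherwise.

    class CausalReplica:
        def __init__(self):
            self.applied = set()       # ids of writes already applied
            self.buffered = []         # writes waiting on their dependencies
            self.store = {}

        def receive(self, write_id, key, value, deps):
            self.buffered.append((write_id, key, value, set(deps)))
            self._drain()

        def _drain(self):
            progress = True
            while progress:
                progress = False
                for w in list(self.buffered):
                    write_id, key, value, deps = w
                    if deps <= self.applied:          # all dependencies satisfied
                        self.store[key] = value
                        self.applied.add(write_id)
                        self.buffered.remove(w)
                        progress = True

    r = CausalReplica()
    r.receive("w2", "reply", "sounds good!", deps={"w1"})   # arrives out of order
    assert "reply" not in r.store                           # held back: w1 not seen
    r.receive("w1", "post", "dinner at 8?", deps=set())
    assert r.store == {"post": "dinner at 8?", "reply": "sounds good!"}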

I like this article because it first points out the three questions it focuses on and then answers each question in sequence. This makes the structure of the article very clear and easy to follow. The downside of the article is its formatting, especially the font sizes of titles and subtitles; it would be better to make them look more distinct.


Review 14

“Eventual Consistency Today: Limitations, Extensions, and Beyond” (2013) is an ACM Queue article by Bailis and Ghodsi discussing eventual consistency and why it has become popular in practice. They first provide background on the motivation for weaker consistency options: Brewer’s CAP theorem says that a system cannot be consistent and highly available when there are partial failures. Therefore, distributed-system researchers and designers developed alternatives, in particular eventual consistency, that guarantee high availability and “eventually consistent” data -- that “eventually” all nodes in the distributed system will have the correct and most up-to-date data. These highly available systems also provide low latency. The authors present experimental results demonstrating that in eventually consistent production systems data becomes consistent quickly, and with lower latency than strongly consistent configurations; for example, their Probabilistically Bounded Staleness (PBS) tool predicted that LinkedIn’s data store would achieve 99.9% consistent data within 13.6ms (with a 16.5% increase in speed), and that Yammer’s data stores would achieve 99.9% consistent data within 202ms (with 81.1% reduced latency). Results like these tend to be “good enough” for application developers. The article also discusses how application designers may consider whether to use an eventually or strongly consistent approach; it depends on the use case: on the benefit of low latency versus the cost of inconsistency anomalies. Designers may also consider whether the application has logical monotonicity. Application developers can also obtain certain guarantees (CALM and ACID 2.0) when using CRDTs to program their system.

This article was easy to read, explaining research in eventual consistency at the conceptual level and frequently tying it back to real-world examples and motivations. I appreciated this as a non-expert in distributed systems/DBMSs. It is also nice that the article discusses 1) how eventual consistency in practice often has similar results to strong consistency (in terms of time to become consistent), but 2) also what the limitations of eventual consistency are (e.g., staleness guarantees are not possible, arbitrary global correctness constraints are not possible).

I think it could have been nice to have a timeline of the advances in eventual consistency, what commercial systems used it, what tools were available for measuring it, etc.



Review 15

This paper discusses "eventual consistency", which is a new concept to me, just know the concept in the Dynamo paper. Eventual consistency is a kind of large-scale weak database design that only permits all read to one item will be eventually the same, no given time is guaranteed.

The paper first gives the history and concepts of eventual consistency. The concept originated from the CAP theorem, which says it is impossible to achieve availability and consistency at the same time. The paper then makes the point that eventual consistency is easy to implement. Next, it presents the question of what the eventual state of the database is, which is a hole in the definition of eventual consistency. The authors give another definition but then raise a further question about it: what values can be returned before the eventual state of the database is reached? After raising these questions, the paper discusses the "eventual" property of eventual consistency, covering metrics, mechanisms, and probabilistically bounded staleness. After that, the paper discusses the implementation of eventual consistency. Finally, it covers compensation and the limitations of eventual consistency.

There are several strong points to the paper. First, it gives a complete discussion of the concept of eventual consistency, from its history and definition to the limits we have now reached. Second, the questions raised in the paper really matter when designing for eventual consistency: there are many aspects to take into consideration and trade-offs to choose in order to meet the requirements.

The weak point of the paper is that its layout is somewhat hard for me to follow. The hierarchy of the paper, shown only by font size, is not good for readers. Also, throughout the 11 pages of the paper, there are no figures or tables to help illustrate its ideas. I don't like papers without figures or tables.


Review 16

Eventual consistency is an interesting model proposed against the background of distributed systems: because distributed systems require always-on, highly available operation, they cannot guarantee the illusion of coherent, consistent single-system operation in the presence of network partitions. Based on this, distributed-system architects frequently dropped strong guarantees, and eventual consistency was introduced. This paper provides a pragmatic introduction to several developments on the cutting edge of eventually consistent systems. This is definitely an important issue nowadays: distributed services are almost everywhere, handling highly concurrent requests while guaranteeing high availability and scalability. The goal of this paper is to provide the necessary background for understanding both how and why eventually consistent systems are programmed, deployed, and evolved, as well as where the systems of tomorrow are heading. Next, I will summarize the key ideas of this paper with my understanding.

The main motivation for eventual consistency is the CAP theorem. Based on the CAP theorem, it is impossible to guarantee consistency, availability, and partition tolerance at the same time. As a result, eventual consistency evolved to provide high availability and performance by sacrificing consistency. Eventual consistency involves replica nodes performing a process called anti-entropy, which involves exchanging information with one another about which updates they have seen. Replica nodes then choose a winning value using a simple rule like last-writer-wins. The paper also discusses why eventual consistency is widely applied even though there is no safety guarantee. Even though eventual consistency doesn't provide safety, in practice many applications observe stronger consistency while executing on such a system. In addition, by using different prediction and measurement techniques, it is possible to quantify the behavior of different applications; when such techniques are used, these systems appear strongly consistent most of the time. Besides, from a practical point of view, the paper also presents how to write programs under eventual consistency. It is difficult to reason about a system with no consistency guarantee. The authors discuss two approaches to building such systems. The first one is called compensation, which allows correcting mistakes retroactively, even though it cannot guarantee that mistakes will not be made. The basic idea of the second approach is that as long as a program limits itself to data structures that avoid inconsistencies, there will not be any consistency violations. In addition, the paper discusses how stronger guarantees can be achieved while maintaining availability, using techniques like causality and transactional properties from the design of traditional DBMSs.
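As a small illustration of the last-writer-wins rule mentioned above (my own sketch, not from the paper), conflicting versions can be resolved by the highest timestamp, with the replica id as a deterministic tie-breaker:

    # Sketch of a last-writer-wins rule for choosing among conflicting versions
    # discovered during anti-entropy: the highest timestamp wins, with the replica
    # id breaking ties so every replica picks the same winner.
    # Illustrative only; real systems must also worry about clock skew.

    def lww_winner(versions):
        # versions: list of (timestamp, replica_id, value)
        return max(versions, key=lambda v: (v[0], v[1]))[2]

    conflicting = [(17, "replica-a", "blue"), (17, "replica-b", "green")]
    assert lww_winner(conflicting) == "green"   # same timestamp, higher replica id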

Generally speaking, this is a great survey-like paper that introduces how to deal with eventually consistent infrastructure nowadays. I like the way the authors write: the paper is easy to read and understand, and they express complex ideas in a natural way, which is very nice. They provide some good insights for dealing with consistency problems in distributed systems, and by following this trend people can go deeper into this area and produce more discussion and design. The trend points to a new direction in distributed DBMSs toward weaker consistency properties by relaxing traditional ACID guarantees.

The downsides of this paper are minor. First of all, the paper only mentions several good examples of conflict avoidance, which resembles an optimistic approach, but it lacks discussion of common strategies for conflict resolution. It also does not talk about the issues caused by not ensuring safety, even though for many distributed algorithms safety takes first priority. Finally, most of the discussion is based on real examples like Dynamo; it might be better if the authors provided more examples from different points of view.



Review 17

This paper is a survey on the relationship between consistency and availability for distributed storage systems, and it particularly focuses on a type of consistency called eventual consistency. Eventual consistency guarantees that at some time in the future, all reads to a particular item in a distributed store will return the same value. There are no guarantees about what it could return in the meantime, or how long it might take for this convergence to happen. The advantages and disadvantages of this consistency model are very intuitive:

Disadvantages:

1) This model is extraordinarily weak and doesn’t disallow a great number of anomalies that other consistency models we have seen protect against. In a system that needs to be consistent at all times (like the state criminal-records system we saw as an example earlier), this consistency model would not meet the needs of the system.

2) Even in systems where inconsistency can be tolerated for brief periods of time, this model can put more work on the developer to deal with and “fix” problems that arise from these inconsistencies with compensating code (for example, things like assigning overdraft fees).

Advantages:

1) Eventual consistency allows a distributed system to be much more available. It is commonly accepted that a system cannot be fully consistent and highly available, and so in systems where availability is paramount, weakening the consistency model is necessary. Eventual consistency is so weak that it allows for a much better availability status than other models we have seen can provide.

2) Many systems do not need the strong consistency guarantees that other models provide. It is relatively easy in many cases to write code on the application side to deal with minor inconsistencies that happen and resolve them. Moreover, these inconsistencies typically do not happen very often in real systems (the authors built a tool called PBS that estimates the level of consistency and how long it takes to converge to a consistent state).

The paper itself was very easy to read, and it seemed like it provided reasonable breadth and depth for a survey on this topic. No figures were present but the topic didn’t really need any to clarify the ideas, in my opinion.