Computation models and specification methods seem to be worlds apart. The evolving algebra project started as an attempt to bridge the gap by improving on Turing's thesis [G1, G2]. We sought more versatile machines that would be able to simulate arbitrary algorithms in a direct and essentially coding-free way. Here the term algorithm is taken in a broad sense, including programming languages, architectures, distributed and real-time protocols, etc. The simulator is not supposed to implement the algorithm on a lower abstraction level; the simulation should be performed on the natural abstraction level of the algorithm.
The evolving algebra thesis asserts that evolving algebras are such versatile machines. The thesis suggests an approach to the notorious correctness problem that arises in mathematical modeling of non-mathematical reality: How can one establish that a model is faithful to reality? The approach is to construct an evolving algebra A that reflects the given computer system so closely that the correctness can be established by observation and experimentation. (There are tools for running evolving algebras.) A can then be refined or coarsened and used for numerous purposes. An instructive example is described in [B] by Egon Börger who championed this approach and termed A the ground model of the system. The use of the successive refinement method is facilitated by the ability of evolving algebras to reflect arbitrary abstraction levels. This has been convincingly demonstrated by Börger and Rosenzweig in [BR]; a simpler example is found in [GH].
Evolving algebras have been used to specify languages (e.g. C, Prolog and VHDL), to specify real and virtual architectures (e.g. APE, PVM and Transputer), to validate standard language implementations (e.g. of Prolog and Occam), to validate distributed protocols (see examples in Parts III and IV of this book), to prove complexity results [BG], etc. See Börger's annotated bibliography on evolving algebras in this book and the proceedings of the first evolving algebra workshop in [PS].
Here we extend the definition of evolving algebras given in the tutorial [G2] (henceforth ``the tutorial''). For the sake of brevity, the term ``evolving algebra'' is often shortened to ``ealgebra'' (pronounced e-algebra) or ``EA''; the latter term is used mostly as an adjective. Static algebras are discussed first. Sequential ealgebras come next: we define basic ealgebras and then equip them with the ability to import new elements. We then discuss nondeterministic sequential ealgebras and some other simple extensions of basic ealgebras, followed by parallel ealgebras and, finally, distributed ealgebras; the treatment of distributed ealgebras can be read immediately after the sequential part. Admittedly this guide is harder to read than the tutorial, and we intend to write a more popular version of the guide.
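To make the flavor of a basic sequential ealgebra concrete, here is a small sketch, not part of the guide's formal definitions. It is a toy interpreter under simplifying assumptions: a state is modeled as a collection of dynamic functions (plain dictionaries), a program as a set of rules that each produce updates, all updates of a step being computed against the same state and then applied simultaneously; `import_element` crudely models importing a brand-new reserve element. All names (`State`, `step`, `import_element`) are invented for this illustration.

```python
# Toy sketch of a basic sequential ealgebra (illustrative, much simplified).
# A state consists of finitely many dynamic functions; a step fires all
# applicable update instructions simultaneously.

class State:
    def __init__(self):
        self.funcs = {}   # function name -> dict from argument tuples to values
        self._fresh = 0   # counter used to model the reserve of new elements

    def get(self, name, *args):
        """Value of dynamic function `name` at `args` (None if undefined)."""
        return self.funcs.setdefault(name, {}).get(args)

    def import_element(self):
        """Crude model of `import`: return a brand-new element each time."""
        self._fresh += 1
        return ("reserve", self._fresh)

def step(state, rules):
    # Every rule inspects the *current* state and yields update triples
    # (name, args, value); only then are all updates applied at once,
    # modeling the simultaneous firing of EA update instructions.
    updates = []
    for rule in rules:
        updates.extend(rule(state))
    for name, args, value in updates:
        state.funcs.setdefault(name, {})[args] = value

# Example: a one-rule algebra with a nullary dynamic function `ctr`.
s = State()
s.funcs["ctr"] = {(): 0}
increment = lambda st: [("ctr", (), st.get("ctr") + 1)]
for _ in range(3):
    step(s, [increment])
```

A real ealgebra program would also involve guards, static functions, and a notion of terms; the point of the sketch is only the two-phase step (compute all updates, then apply them) and the freshness of imported elements.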
Now let us return to the EA thesis. In the tutorial, we defined sequential ealgebras and sketched a speculative philosophical ``proof'' of the sequential version of the thesis. The definition of sequential ealgebras and the sequential EA thesis have survived several years of intensive application and experimentation. As a matter of fact, we (the EA community) seem to have run out of challenges.
The situation with non-sequential computations is more complicated. It seems that, for every reasonably understood class of algorithms, there is a natural extension of the basic EA model that ``captures'' that class. That form of the EA thesis has also survived several years of intensive application and experimentation. The philosophy and guiding principles of the EA approach seem quite stable. However, at the current stage of computer science, there is as yet no clear understanding of what parallel, distributed or real-time algorithms are in general. Thus, the definitions of parallel and distributed ealgebras given below are necessarily tentative. They provide a foundation for existing EA applications and reflect my anticipation of things to come. (Many existing applications, including those in this volume, were done before this guide was completed; the terminology there may reflect earlier versions of the guide.)
We try to derive our definitions from first principles. Unfortunately, some arbitrariness is inescapable, and one has to balance clarity and simplicity against programming convenience and efficient execution. When one thinks mostly about applications, as we do, there is a tendency to prefer programming convenience and efficient execution. This is a dangerous trend that leads to an idiosyncratic programming language. For future reference we formulate the following principle: