Most successful software is developed through a series of phases, such as requirements gathering and prototyping, that together make up the development process. These phases are rarely fully discrete; they often intertwine, which makes it inevitable to return to earlier phases and make changes based on results obtained in later ones. A model in which multiple phases are performed concurrently is called a concurrent model. Some examples of concurrent models in software engineering will be discussed in this lesson.
In the waterfall model, also called the classical waterfall model, development proceeds linearly and sequentially. It is one of the best-known versions of the software development life cycle in software engineering, and its name reflects its nature: once water begins its journey down a mountain, it cannot flow back up. It's the same with the waterfall model. Once a phase of development is completed, the process proceeds to the next phase, and there is no turning back.
The prototype model suggests that, before the actual software is developed, a working prototype of it should be built. A quick design is carried out and a prototype is built, after which the prototype is submitted to the customer for evaluation. Based on the customer's feedback and requirements, the prototype is refined and modified. This process continues until the customer approves the prototype. The actual system is then developed using an iterative waterfall model. The finished software has more functioning capabilities, is more reliable, and gives better performance than the prototype.
The prototype model is typically used where there is a lot of interaction between the software and its users. A good example is an online web interface with a high degree of end-user interaction.
The spiral model was introduced by Barry Boehm in 1986. Spiral model activities are organized in a spiral and have many cycles. This model combines the prototype model and the waterfall model.
The spiral model is used when the project is huge, requires months of development, and follows a series of releases. Each release is like an updated version of the software. Usually, software updates or change requests follow the spiral model.
Let's take a couple of moments to review. Concurrent models are those in which the various activities of software development happen at the same time, for faster development and a better outcome. The concurrent model is also referred to as a parallel working model. The waterfall model, introduced by Winston W. Royce in 1970, pictures the phases working linearly and sequentially. In the prototype model, a working prototype of the software is made before the actual software is built. The spiral model, introduced by Barry Boehm in 1986, combines the methods of the prototype model and the waterfall model. Each of these models has its own advantages and disadvantages, ranging from the waterfall model's relative simplicity to the prototype model's low reliability to the spiral model's time-consuming nature. You just have to know what your project is in for before selecting the model you want to use, and examining all the advantages and disadvantages we covered in this lesson should help you do that.
The actor model in computer science is a mathematical model of concurrent computation that treats an actor as the basic building block of concurrent computation. In response to a message it receives, an actor can: make local decisions, create more actors, send more messages, and determine how to respond to the next message received. Actors may modify their own private state, but can only affect each other indirectly through messaging (removing the need for lock-based synchronization).
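These building blocks can be sketched in Python using one thread and one mailbox per actor. This is an illustrative sketch, not a production actor framework; the `CounterActor` class and its message names are invented for the example.

```python
import queue
import threading

class CounterActor:
    """Minimal actor: private state, a mailbox, and one thread that
    handles messages one at a time."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self.count = 0  # private state, modified only by the actor's own thread
        self._thread = threading.Thread(target=self._run)
        self._thread.start()

    def send(self, message):
        # Asynchronous send: the caller never blocks waiting for a reply.
        self._mailbox.put(message)

    def _run(self):
        while True:
            message = self._mailbox.get()
            if message == "stop":
                return
            self.count += 1  # local decision made in response to a message

actor = CounterActor()
for _ in range(3):
    actor.send("increment")
actor.send("stop")
actor._thread.join()  # wait until the actor has drained its mailbox
print(actor.count)  # prints 3
```

Because other threads interact with the actor only through `send`, there is no lock around `count`: the single actor thread is the only writer, which is the messaging-instead-of-locks discipline the paragraph describes.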
The actor model originated in 1973. It has been used both as a framework for a theoretical understanding of computation and as the theoretical basis for several practical implementations of concurrent systems. The relationship of the model to other work is discussed in actor model and process calculi.
According to Carl Hewitt, unlike previous models of computation, the actor model was inspired by physics, including general relativity and quantum mechanics. It was also influenced by the programming languages Lisp, Simula, early versions of Smalltalk, capability-based systems, and packet switching. Its development was "motivated by the prospect of highly parallel computing machines consisting of dozens, hundreds, or even thousands of independent microprocessors, each with its own local memory and communications processor, communicating via a high-performance communications network." Since that time, the advent of massive concurrency through multi-core and manycore computer architectures has revived interest in the actor model.
Following Hewitt, Bishop, and Steiger's 1973 publication, Irene Greif developed an operational semantics for the actor model as part of her doctoral research. Two years later, Henry Baker and Hewitt published a set of axiomatic laws for actor systems. Other major milestones include William Clinger's 1981 dissertation introducing a denotational semantics based on power domains and Gul Agha's 1985 dissertation which further developed a transition-based semantic model complementary to Clinger's. This resulted in the full development of actor model theory.
Major software implementation work was done by Russ Atkinson, Giuseppe Attardi, Henry Baker, Gerry Barber, Peter Bishop, Peter de Jong, Ken Kahn, Henry Lieberman, Carl Manning, Tom Reinhardt, Richard Steiger and Dan Theriault in the Message Passing Semantics Group at Massachusetts Institute of Technology (MIT). Research groups led by Chuck Seitz at California Institute of Technology (Caltech) and Bill Dally at MIT constructed computer architectures that further developed the message passing in the model. See Actor model implementation.
The actor model is characterized by inherent concurrency of computation within and among actors, dynamic creation of actors, inclusion of actor addresses in messages, and interaction only through direct asynchronous message passing with no restriction on message arrival order.
There are also formalisms that are not fully faithful to the actor model, in that they do not formalize the guaranteed delivery of messages (see Attempts to relate actor semantics to algebra and linear logic).
The first models of computation (e.g., Turing machines, Post productions, the lambda calculus, etc.) were based on mathematics and made use of a global state to represent a computational step (later generalized in [McCarthy and Hayes 1969] and [Dijkstra 1976]; see Event orderings versus global state). Each computational step was from one global state of the computation to the next. The global state approach was continued in automata theory for finite-state machines and pushdown automata, including their nondeterministic versions. Such nondeterministic automata have the property of bounded nondeterminism; that is, if a machine always halts when started in its initial state, then there is a bound on the number of states in which it halts.
Edsger Dijkstra further developed the nondeterministic global state approach. Dijkstra's model gave rise to a controversy concerning unbounded nondeterminism (also called unbounded indeterminacy), a property of concurrency by which the amount of delay in servicing a request can become unbounded as a result of arbitration of contention for shared resources while still guaranteeing that the request will eventually be serviced. Hewitt argued that the actor model should provide the guarantee of service. In Dijkstra's model, although there could be an unbounded amount of time between the execution of sequential instructions on a computer, a (parallel) program that started out in a well defined state could terminate in only a bounded number of states [Dijkstra 1976]. Consequently, his model could not provide the guarantee of service. Dijkstra argued that it was impossible to implement unbounded nondeterminism.
Messages in the actor model are not necessarily buffered. This was a sharp break with previous approaches to models of concurrent computation. The lack of buffering caused a great deal of misunderstanding at the time of the development of the actor model and is still a controversial issue. Some researchers argued that the messages are buffered in the "ether" or the "environment". Also, messages in the actor model are simply sent (like packets in IP); there is no requirement for a synchronous handshake with the recipient.
A natural development of the actor model was to allow addresses in messages. Influenced by packet switched networks [1961 and 1964], Hewitt proposed the development of a new model of concurrent computation in which communications would not have any required fields at all: they could be empty. Of course, if the sender of a communication desired a recipient to have access to addresses which the recipient did not already have, the address would have to be sent in the communication.
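The pattern described here, where the sender includes an address in the message so the recipient can answer, can be sketched as follows. The `Doubler` and `Collector` actors and the `reply_to` field are hypothetical names chosen for the example, not part of any actor-model specification.

```python
import queue
import threading

class Actor:
    """Base actor: each instance owns a mailbox and a processing thread."""
    def __init__(self):
        self._mailbox = queue.Queue()
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)

    def _run(self):
        while True:
            self.receive(self._mailbox.get())  # subclasses define receive()

class Doubler(Actor):
    def receive(self, message):
        value, reply_to = message  # the message carries the sender's address
        reply_to.send(value * 2)   # reply by sending to that address

class Collector(Actor):
    def __init__(self):
        self.results = queue.Queue()  # created before the thread starts
        super().__init__()

    def receive(self, message):
        self.results.put(message)

collector = Collector()
doubler = Doubler()
doubler.send((21, collector))  # the recipient's reply address travels inside the message
answer = collector.results.get(timeout=5)
print(answer)  # prints 42
```

Note that `Doubler` knows nothing about `Collector` ahead of time: the only way it can reach the collector is through the address delivered in the communication, exactly as the paragraph describes.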
As opposed to the previous approach based on composing sequential processes, the actor model was developed as an inherently concurrent model. In the actor model sequentiality was a special case that derived from concurrent computation as explained in actor model theory.
Hewitt argued against adding the requirement that messages must arrive in the order in which they are sent to the actor. Thus, if an actor X sends a message M1 to an actor Y, and later X sends another message M2 to Y, there is no requirement that M1 arrives at Y before M2. If output message ordering is desired, it can be modeled by a queue actor that provides this functionality: such a queue actor would buffer the messages that arrived so that they could be retrieved in FIFO order.
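A queue actor of this kind might be sketched as below. The `put`/`get` message protocol and the `QueueActor` name are assumptions made for the example, not part of the actor model itself; the requester's address is represented by a plain queue for brevity.

```python
import queue
import threading

class QueueActor:
    """Buffers 'put' items and serves 'get' requests in FIFO order."""
    def __init__(self):
        self._mailbox = queue.Queue()
        self._items = []    # buffered messages, in arrival order
        self._waiting = []  # reply addresses of pending 'get' requests
        threading.Thread(target=self._run, daemon=True).start()

    def send(self, message):
        self._mailbox.put(message)

    def _run(self):
        while True:
            kind, payload = self._mailbox.get()
            if kind == "put":
                self._items.append(payload)
            elif kind == "get":
                self._waiting.append(payload)
            # Pair buffered items with waiting requesters, FIFO on both sides.
            while self._items and self._waiting:
                self._waiting.pop(0).put(self._items.pop(0))

q = QueueActor()
q.send(("put", "M1"))
q.send(("put", "M2"))
reply = queue.Queue()  # stands in for the requesting actor's address
q.send(("get", reply))
q.send(("get", reply))
first = reply.get(timeout=5)
second = reply.get(timeout=5)
print(first, second)  # prints M1 M2
```

The actor's own mailbox imposes the FIFO discipline here; senders elsewhere in the system still enjoy unordered, asynchronous delivery unless they choose to route through such a queue.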