Although a random walk is expected to come to an end, the possibility of enhancing the process was being investigated. Instead of rejecting an event because of unsatisfied conditions (e.g. guards, predicates, matching), an attempt would be made to determine values satisfying those conditions and to carry on from the deadlocked state.
The difficulty that arose was the re-execution of exactly
the same sequence of actions to return to the deadlock state. To illustrate this, imagine the following behaviour B
:

    enter?x:nat; ( ha; ha; ha; offer!x[x<1]; ProcessSuccess[enter,offer] [] ha; ha; ha; offer!x[x<1]; ProcessFailure[enter,offer] )

with a deadlock found after executing the trace enter?2;ha;ha;ha. Even if a proper value for enter?x:nat
could be determined, there would be no way of knowing whether the next actions (ha;ha;ha
) should be taken from the left or the right hand side of the choice
operator. No mechanism existed that could guarantee a return to a previously visited state.
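The ambiguity can be made concrete with a small sketch. The encoding below (behaviours as nested tuples, in Python) is purely hypothetical and is not the representation used by the tool; it only shows that, since both branches of the choice offer the same first event ha, the event name alone cannot identify the branch that was taken:

    # Minimal sketch of the ambiguity, under a hypothetical tuple encoding
    # of behaviours (illustrative only, not the tool's data structures).

    def prefix(action, rest):          # "action; rest"
        return ("prefix", action, rest)

    def choice(left, right):           # "left [] right"
        return ("choice", left, right)

    def successors(behaviour, event):
        """Every behaviour reachable by executing `event` once."""
        if behaviour[0] == "prefix" and behaviour[1] == event:
            return [behaviour[2]]
        if behaviour[0] == "choice":
            return successors(behaviour[1], event) + successors(behaviour[2], event)
        return []

    def branch(continuation):          # "ha; ha; ha; continuation"
        return prefix("ha", prefix("ha", prefix("ha", continuation)))

    # The choice reached in behaviour B after the initial enter event.
    after_enter = choice(branch("ProcessSuccess"), branch("ProcessFailure"))

    # Two distinct successor states for the same event: the trace "ha" does
    # not say whether the left or the right branch was chosen.
    print(len(successors(after_enter, "ha")))   # -> 2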
The first building block of GOE
was to be invented. Besides the event itself, each derivation of the behaviour would also provide information related to the inference rules applied
in order to derive the resulting event. This information was called the Dynamic Derivation Path
(DDP). For instance, given the behaviour B'
:

    ha; ha; ha; offer!x[x<1]; ProcessSuccess[enter,offer] [] ha; ha; ha; offer!x[x<1]; ProcessFailure[enter,offer]

the derivation of the first event ha from the left hand side of the choice would have the DDP choice-left, and the derivation of the first event ha from the right hand side of the choice would have the DDP choice-right.
This was sufficient to solve the problem tied to the enhancement of the random walk technique. But why stop here?

Another application of the DDP was to provide a mechanism for re-execution in ISLA. There would no longer be any need to save an entire behaviour in order to be able to return to that state later on. The DDP of each executed event would convey the information necessary to replay it, and this could be done efficiently since the application of the inference rules then becomes deterministic. And there's more!
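Before moving on, the replay idea can be sketched with the same hypothetical tuple encoding as above: derivations are annotated with the inference rules applied (labelled here prefix, choice-left and choice-right, which are illustrative names only), and a replay function follows a recorded DDP without ever having to choose between alternatives:

    # Hypothetical sketch of DDP-annotated derivations and deterministic
    # replay. Encoding, names and rule labels are illustrative only.

    def prefix(action, rest):          # "action; rest"
        return ("prefix", action, rest)

    def choice(left, right):           # "left [] right"
        return ("choice", left, right)

    def derivations(b):
        """All one-step derivations of b, as (event, ddp, next_behaviour) triples."""
        if b[0] == "prefix":
            return [(b[1], ["prefix"], b[2])]
        if b[0] == "choice":
            return ([(e, ["choice-left"] + d, nb) for e, d, nb in derivations(b[1])] +
                    [(e, ["choice-right"] + d, nb) for e, d, nb in derivations(b[2])])
        return []

    def replay_step(b, ddp):
        """Re-apply one recorded DDP: every rule selects its operand deterministically."""
        for rule in ddp:
            if rule == "choice-left":
                b = b[1]                # keep only the left operand of the choice
            elif rule == "choice-right":
                b = b[2]                # keep only the right operand of the choice
            elif rule == "prefix":
                b = b[2]                # consume the prefixed action
        return b

    def branch(continuation):
        return prefix("ha", prefix("ha", prefix("ha", continuation)))

    b_prime = choice(branch("ProcessSuccess"), branch("ProcessFailure"))

    # The first "ha" derived from the right-hand side carries the DDP
    # ["choice-right", "prefix"]; replaying it recovers the same state.
    event, ddp, after = derivations(b_prime)[1]
    assert event == "ha" and replay_step(b_prime, ddp) == after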
The next step consisted in somehow inverting the process. Instead of just using the DDPs of the executed events to replay a trace, it should be possible to build the DDPs from a static analysis of the behaviour. That is, with a particular goal in mind, the system could derive the DDPs leading to that goal and then execute them.
GOE was born.
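Under the same hypothetical encoding, the inversion can be sketched as a static search of the behaviour: given a goal (here a named continuation), compute the sequence of DDPs leading to it, which can then be executed deterministically. This is only an illustration of the direction taken, not the actual GOE algorithm:

    # Hypothetical sketch of building DDPs statically, towards a goal.
    # Encoding, names and rule labels are illustrative only.

    def prefix(action, rest):
        return ("prefix", action, rest)

    def choice(left, right):
        return ("choice", left, right)

    def ddps_towards(b, goal):
        """One DDP per event, for a path leading from b to the goal process, or None."""
        if isinstance(b, str):                       # a named continuation
            return [] if b == goal else None
        if b[0] == "prefix":
            rest = ddps_towards(b[2], goal)
            if rest is not None:
                return [["prefix"]] + rest
        elif b[0] == "choice":
            for label, operand in (("choice-left", b[1]), ("choice-right", b[2])):
                rest = ddps_towards(operand, goal)
                if rest:                             # the choice rule decorates the first event
                    return [[label] + rest[0]] + rest[1:]
        return None

    def branch(continuation):
        return prefix("ha", prefix("ha", prefix("ha", continuation)))

    b_prime = choice(branch("ProcessSuccess"), branch("ProcessFailure"))

    # Statically computed path to ProcessSuccess: three "ha" events, the first
    # one resolving the choice to its left-hand side.
    print(ddps_towards(b_prime, "ProcessSuccess"))
    # -> [['choice-left', 'prefix'], ['prefix'], ['prefix']]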